03/04 2025
Author|Qijian Editor|Yifan
The Evolutionary Path of OpenAI
How marketable is the title of 'former OpenAI employee'?
On February 25, local time, Business Insider reported that Mira Murati, the former Chief Technology Officer of OpenAI, has officially announced a $1 billion funding round for her new company, Thinking Machines Lab, with a valuation of $9 billion.
Currently, Thinking Machines Lab has not disclosed any timelines or specific details about its products or technologies. The only public information available about the company is its team of over 20 former OpenAI employees and their vision: to build a future where "everyone has access to knowledge and tools, enabling AI to serve people's unique needs and goals."
Mira Murati and Thinking Machines Lab
The capital appeal of OpenAI-affiliated entrepreneurs has sparked a 'snowball effect.' Before Murati, SSI, founded by Ilya Sutskever, the former Chief Scientist of OpenAI, secured a valuation of $30 billion based solely on its OpenAI heritage and a single concept.
Since Musk stepped down from OpenAI in 2018, former OpenAI employees have founded over 30 new companies with a total funding of over $9 billion. These companies have formed a comprehensive ecosystem spanning AI safety (Anthropic), infrastructure (xAI), and vertical applications (Perplexity).
This reminds one of the wave of Silicon Valley entrepreneurship that emerged after the acquisition of PayPal by eBay in 2002, when founders like Musk and Peter Thiel left to form the 'PayPal Mafia.' From this group emerged legendary companies like Tesla, LinkedIn, and YouTube. Similarly, the departing employees of OpenAI are forming their own 'OpenAI Mafia.'
However, the 'OpenAI Mafia's' script is more aggressive: while the 'PayPal Mafia' took a decade to build two companies worth hundreds of billions of dollars, the 'OpenAI Mafia' has spawned five companies with valuations of over $10 billion in just two years after the launch of ChatGPT. Among them, Anthropic is valued at $61.5 billion, Ilya Sutskever's SSI at $30 billion, and Musk's xAI at $24 billion. It is highly likely that a $100 billion unicorn will emerge from the 'OpenAI Mafia' within the next three years.
The new wave of 'talent fission' sparked by the 'OpenAI Mafia' is impacting the entire Silicon Valley and even reshaping the global AI landscape.
The Evolutionary Path of OpenAI
Of OpenAI's 11 co-founders, only Sam Altman and Wojciech Zaremba, head of the language and code generation team, remain at the company.
2024 marked a peak in departures from OpenAI. During this year, notable exits included Ilya Sutskever (May 2024) and John Schulman (August 2024). The OpenAI safety team shrank from 30 to 16 members, a reduction of 47%. Key figures such as Chief Technology Officer Mira Murati and Chief Research Officer Bob McGrew left the executive ranks. Within the technical team, core talents like Alec Radford, the chief designer of the GPT series, and Tim Brooks, head of Sora (who joined Google), departed. Deep learning expert Ian Goodfellow joined Google, while Andrej Karpathy left a second time to found an education company.
'Together, we are a blazing fire; apart, we are stars scattered across the sky.'
Over 45% of the core technical staff who joined OpenAI before 2018 have chosen to strike out on their own, and these new ventures have disassembled and reassembled OpenAI's technology gene pool into three strategic camps.
The first is the 'lineage force' that continues the OpenAI genes, a group of ambitious entrepreneurs aiming to create OpenAI 2.0.
Mira Murati's Thinking Machines Lab has almost entirely transplanted the R&D architecture of OpenAI: John Schulman is responsible for the reinforcement learning framework, Lilian Weng leads the AI safety system, and even the neural architecture diagram of GPT-4 is directly used as the technical blueprint for new projects.
Their 'Declaration of Open Science' directly addresses OpenAI's recent drift toward closedness, pledging a 'more transparent AGI development path' through the continuous publication of technical blogs, papers, and code. This has already triggered a chain reaction in the AI industry: three top researchers from Google DeepMind, bringing experience with the Transformer-XL architecture, have left to join.
On the other hand, Ilya Sutskever's Safe Superintelligence Inc. (SSI) has chosen a different path. Sutskever co-founded the company with two other researchers, Daniel Gross and Daniel Levy, abandoning all short-term commercialization goals to focus on building 'irreversibly safe superintelligence'—a near-philosophical technical framework. Shortly after the company's establishment, institutions like a16z and Sequoia Capital decided to invest $1 billion to support Sutskever's vision.
Ilya Sutskever and SSI
Another faction comprises 'disruptors' who left before ChatGPT.
Dario Amodei's Anthropic has evolved from an 'OpenAI opposition' into its most dangerous competitor. Its Claude 3 series of models is on par with GPT-4 in multiple benchmarks, and its exclusive partnership with Amazon AWS is gradually eroding OpenAI's computing-power base. The chip technology jointly developed by Anthropic and AWS may further weaken OpenAI's bargaining power in NVIDIA GPU procurement.
Another representative figure in this faction is Musk. Although he left OpenAI in 2018, several founding members of his company xAI have worked at OpenAI, including Igor Babuschkin and Kyle Kosic (the latter has since returned to OpenAI). Backed by Musk's formidable resources, xAI threatens OpenAI on talent, data, and computing power. By integrating real-time social data streams from Musk's X platform, xAI's Grok-3 can instantly capture trending events on X to generate answers, while ChatGPT's training data cuts off in 2023, leaving a significant gap in timeliness. This data loop is difficult for OpenAI, which relies on the Microsoft ecosystem, to replicate.
However, Musk positions xAI not as a disruptor of OpenAI but as a reclaimer of its original mission. xAI pursues a strategy of 'maximum open source,' for example open-sourcing the Grok-1 model under the Apache 2.0 license to attract global developers into its ecosystem. This contrasts sharply with OpenAI's recent turn toward closed source (GPT-4, for instance, is available only through its API).
The third faction comprises 'breakers' who are reshaping industrial logic.
Perplexity, founded by Aravind Srinivas, a former research scientist at OpenAI, is among the first companies to use large AI models to reinvent the search engine. By having AI generate answers directly instead of returning lists of links, Perplexity now handles over 20 million searches daily and has raised more than $500 million at a $9 billion valuation.
Adept was founded by David Luan, a former Vice President of Engineering at OpenAI who worked on language, supercomputing, reinforcement learning, and safety and policy for projects such as GPT-2, GPT-3, CLIP, and DALL-E. Adept focuses on AI agents, aiming to help users automate complex tasks (such as generating compliance reports and design drawings) by combining large models with tool-invocation capabilities; its ACT-1 model can directly operate office software, Photoshop, and other applications. The company's core founding team, including David Luan, has since moved to Amazon's AGI team.
Covariant is a $1 billion embodied-intelligence startup whose founding team comes from OpenAI's disbanded robotics team and brings experience from GPT model development. It focuses on foundation models for robots, aiming to achieve autonomous robot operation through multimodal AI, particularly in warehouse logistics automation. However, three 'OpenAI Mafia' members from Covariant's core founding team, Pieter Abbeel, Peter Chen, and Rocky Duan, have all joined Amazon.
Some 'OpenAI Mafia' startups
Source: Public information, compiled by Qijian
The transition of AI technology from a 'tool' to a 'factor of production' has given rise to three types of industrial opportunity: replacement scenarios (e.g., disrupting traditional search engines), incremental scenarios (e.g., the intelligent transformation of manufacturing), and reconstructive scenarios (e.g., breakthroughs in life sciences). These scenarios share three features: the potential to build data flywheels (user-interaction data feeding back into models), deep interaction with the physical world (robot motion data, biological experiment data), and regulatory gray areas around ethics.
The technology spillover from OpenAI is providing the underlying impetus for this industrial transformation. Its early open-source strategy (e.g., partial open-source of GPT-2) created a 'dandelion effect' of technology diffusion, but as technological breakthroughs entered deeper waters, closed-source commercialization became an inevitable choice.
This contradiction has given rise to two phenomena: on the one hand, departing talents are migrating technologies such as Transformer architectures and reinforcement learning to vertical scenarios (e.g., manufacturing, biotech), building barriers through scenario data; on the other hand, giants are achieving technology positioning through talent acquisitions, forming a closed loop of 'technology harvesting.'
When Moats Become Watersheds
The 'OpenAI Mafia' is surging ahead, while their former employer, OpenAI, is struggling.
On technology and products, the release of GPT-5 has been repeatedly delayed, and the flagship ChatGPT is widely perceived as falling behind the industry's pace of innovation.
In the market, newcomer DeepSeek has gradually caught up with OpenAI, with model performance approaching ChatGPT's at only about 5% of GPT-4's training cost. This low-cost replication path is eroding OpenAI's technological moat.
However, a significant part of the rapid growth of the 'OpenAI Mafia' can be attributed to internal conflicts within OpenAI.
The core research team at OpenAI is in disarray: of the 11 co-founders, only Sam Altman and Wojciech Zaremba remain, and 45% of core researchers have left.
Wojciech Zaremba
Co-founder and Chief Scientist Ilya Sutskever left to found SSI; founding member Andrej Karpathy departed and now publicly shares his experience optimizing Transformers; and Tim Brooks, head of the Sora video-generation project, joined Google DeepMind. Within the technical team, more than half of the authors of the early GPT papers have left, most joining the ranks of OpenAI's competitors.
Meanwhile, according to data compiled by Lightcast, which tracks job postings, OpenAI's own hiring focus seems to have changed. In 2021, 23% of the company's job postings were for general research positions. By 2024, general research accounted for only 4.4% of its job postings, which also reflects the changing status of scientific research talent at OpenAI.
The organizational and cultural conflict brought on by the commercial transformation has become increasingly apparent: while the workforce has expanded by 225% in three years, the early hacker spirit is gradually giving way to a KPI system, with researchers 'forced to shift from exploratory research to product iteration.'
This strategic swing has put OpenAI in a double bind: it needs to continuously produce groundbreaking technologies to maintain its valuation while also facing competitive pressure from former employees rapidly replicating its achievements using its methodologies.
The key to winning in the AI industry lies not in parameter breakthroughs in the lab but in who can inject technological genes into the capillaries of the industry—reconstructing the underlying logic of the business world in the answer streams of search engines, the motion trajectories of robotic arms, and the molecular dynamics of biological cells.
Is Silicon Valley Fragmenting OpenAI?
The rapid rise of the 'OpenAI Mafia,' like that of the 'PayPal Mafia' before it, owes much to California law.
Since California legislated to ban non-compete agreements in 1872, its unique legal environment has become a catalyst for innovation in Silicon Valley. According to Section 16600 of the California Business and Professions Code, any clause that restricts professional freedom is invalid. This institutional design has directly promoted the free flow of technical talent.
The average tenure of a programmer in Silicon Valley is only 3-5 years, far shorter than in other technology hubs. This high mobility creates a 'knowledge spillover' effect. Take Fairchild Semiconductor: its former employees founded 12 semiconductor giants, including Intel and AMD, laying the foundation for Silicon Valley's semiconductor industry.
A ban on non-compete agreements may seem to leave innovative companies under-protected, but in practice it promotes innovation: the mobility of technical talent accelerates the diffusion of technology and lowers the barriers to entry.
In 2024, the U.S. Federal Trade Commission (FTC) projected that its nationwide ban on non-compete agreements, announced in April 2024, would further unleash American innovation: roughly 8,500 additional new businesses each year, and 17,000-29,000 more patents annually over the next decade, an increase of 11-19% per year.
Capital is also a significant driver behind the rise of the OpenAI Mafia.
Silicon Valley commands over 30% of U.S. venture capital, with institutions like Sequoia Capital and Kleiner Perkins building a complete financing chain from seed rounds to IPOs. This capital intensity is a double-edged sword.
First, capital is a catalyst for innovation: angel investors bring not just funding but also industry resources. Uber, for instance, started with a mere $200,000 in seed funding from its founders and just three registered cars; after receiving $1.25 million in angel investment, it entered a rapid financing trajectory that culminated in a $40 billion valuation by 2015.
Venture capital's sustained focus on technology has also propelled the sector's evolution. Sequoia Capital's investments in Apple (1978) and Oracle (1984) cemented its influence in semiconductors and computing, and in 2020 it deepened its commitment to artificial intelligence, backing cutting-edge projects like OpenAI. Giants like Microsoft have poured billions into AI, compressing the commercialization of generative AI from years to months.
Capital also endows innovative companies with greater resilience. The speed with which accelerators discard failed projects is as crucial as their ability to nurture successful ones. According to Startuptalky, the global startup failure rate stands at 90%, with Silicon Valley's at 83%. Yet in the venture-capital ecosystem, failure quickly becomes nourishment for new ventures.
However, capital has also subtly altered the development trajectory of these innovative enterprises.
Top AI projects attain billion-dollar valuations before they even launch a product, making it far harder for small and medium-sized innovative teams to secure resources. This structural imbalance is most evident geographically: Dealroom's research shows that in a single quarter, venture capital raised in the U.S. Bay Area ($24.7 billion) rivaled the combined total of the next four global venture hubs (London, Beijing, Bangalore, Berlin). And while emerging markets like India have seen funding surge 133%, 97% of those funds flow to 'unicorns' valued at over $1 billion.
Additionally, capital exhibits strong 'path dependence,' favoring domains with quantifiable returns and leaving many emerging basic scientific innovations struggling for backing. In quantum computing, Guo Guoping, the founder of Origin Quantum, sold his house to fund his venture in the face of early capital constraints. His first funding round came in 2015, when China's total scientific research investment was less than 2.2% of GDP and basic research accounted for only 4.7% of R&D spending.
Beyond withholding support, large capital also lures top talent with outsized pay, pushing CTO-level startup salaries into seven figures (USD at American companies, CNY at Chinese ones) and reinforcing a cycle of 'giants monopolizing talent, capital chasing giants.'
However, the significant pre-valuation of these "OpenAI Mafia" companies entails certain risks.
Both Mira Murati's and Ilya Sutskever's companies secured billions in funding on the strength of an idea alone. This reflects the trust premium accorded to the technical prowess of OpenAI's elite teams, but it rests on risky assumptions: that AI capabilities will keep growing exponentially over the long term, and that vertical-scenario data will form monopolistic barriers. When these assumptions meet reality (slowing multimodal breakthroughs, surging industry data-acquisition costs), the overheated capital could trigger an industry reshuffle.