AI Expert Tian Yuandong's New Startup Raises $650 Million, as Investors Including Jensen Huang and Lisa Su Scramble to Invest

05/15/2026

They Aim to Create Self-Evolving AI

Author | Peilin

Editor | Mufeng


AI expert Tian Yuandong announces his entrepreneurial venture, with investors including Jensen Huang and Lisa Su vying to invest.

On May 14, Recursive, co-founded by Tian Yuandong, announced the completion of $650 million in funding, with the company's valuation reaching $4.65 billion. The investor list is extremely impressive, led by GV (Google Ventures) and Greycroft, with participation from AMD Ventures and NVIDIA.

It's worth noting that the company is less than half a year old, with a team of fewer than 30 people. Yet its eight co-founders span much of the core AI research talent of the past decade, drawing key members from Meta FAIR, OpenAI, Google DeepMind, Salesforce AI, and Uber AI.

The driving force behind Recursive, Tian Yuandong, boasts top-tier academic credentials from Shanghai Jiao Tong University and Carnegie Mellon University. As a core researcher at Meta, he participated in the evolution of foundational models, later experienced internal turmoil at Meta, and eventually left the tech giant to start his own venture.

What they aim to achieve this time is extremely bold: enabling AI to research and optimize AI on its own, ultimately achieving recursive self-improvement. To some extent, this is no longer a traditional large model startup but a bet on the next generation of AI paradigms.

Letting AI Upgrade Itself

The name Recursive already explicitly states its core direction: Recursive Self-Improvement.

In simple terms, they aim to build an AI system capable of automatically identifying problems, designing experiments, modifying code, verifying results, and continuing to optimize itself. Traditionally, AI progress has been driven by human researchers: researchers propose new ideas, engineers write code, teams train models, evaluate results, and then proceed to the next iteration.

However, Recursive wants to hand over a significant portion of this cycle directly to AI itself.

In their vision, AI will not only answer questions or assist in writing code but will also be able to proactively identify its own capability shortcomings, automatically design benchmarks, rewrite its own training and optimization processes, and ultimately form a continuously self-evolving system.
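The loop described above can be sketched in a few lines. This is a minimal illustrative toy, not Recursive's actual system: the "model" is reduced to a single benchmark score, a "patch" is a random candidate change, and a change is kept only if verification shows it improves the benchmark. All names here are hypothetical.

```python
import random

def evaluate(score: float) -> float:
    """Benchmark the current system; in this toy, the score itself."""
    return score

def propose_patch(score: float, rng: random.Random) -> float:
    """Stand-in for the AI proposing a modification to itself."""
    return score + rng.uniform(-0.5, 1.0)

def self_improve(initial: float, steps: int = 100, seed: int = 0) -> float:
    """Propose, verify, and keep only patches that improve the benchmark."""
    rng = random.Random(seed)
    current = initial
    for _ in range(steps):
        candidate = propose_patch(current, rng)
        if evaluate(candidate) > evaluate(current):  # verify before accepting
            current = candidate
    return current

if __name__ == "__main__":
    print(f"score after 100 candidate patches: {self_improve(0.0):.2f}")
```

The key design point the sketch captures is the verification gate: because a patch is only accepted when it measurably improves the benchmark, the system's score never regresses, which is also why benchmark design itself becomes part of the research loop.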

A quote from CEO Richard Socher succinctly summarizes this logic:

"AI is itself code, and now AI can already write code."

From Recursive's perspective, the most critical issue in the era of large models has changed. In the past, the industry believed in the Scaling Law: as long as more parameters, more data, and more computational power were continuously stacked, model capabilities would keep improving. But now, more and more people are beginning to realize that marginal returns are declining.

Training costs are rising higher and higher, yet improvements in model capabilities are slowing down.

Simply stacking more parameters, more data, and more computational power no longer delivers the rapid gains it once did; the logic of "brute force through scale" is approaching its limits. Silicon Valley has thus begun searching for the next growth path beyond large models.

Recursive is betting on one of the most radical answers: letting AI become its own researcher.

They have even set an extremely ambitious goal: first, train an AI system with the capabilities of 50,000 Ph.D.s to automate AI scientific research itself. Then, expand this system into fields such as drug discovery, biological research, battery materials, and nuclear fusion.

In a sense, what they aim to achieve is not just a more powerful large model but AI that can strengthen itself.

And the capital markets are clearly very supportive.

The company closed its $650 million round and reached a $4.65 billion valuation immediately upon its official announcement, a funding scale that is exceptionally rare in the history of AI startups.

The reasons behind this are straightforward.

Over the past year, the AI industry has already begun a collective shift toward automated scientific research.

OpenAI is developing the Automated AI Researcher; Anthropic is also advancing AI automation research systems; DeepMind has introduced AlphaEvolve; and Darwin Gödel Machine, in which Jeff Clune is involved, has already begun attempting to let AI automatically modify its own code and verify its effectiveness using benchmarks.

The entire industry is trying to answer a question:

If AI can already write code, how far is it from "upgrading itself"?

Recursive is currently one of the most radical teams in this pursuit.

After Meta's Layoffs, All of Silicon Valley Is Vying for Tian Yuandong

Within Recursive's team, Tian Yuandong is one of the most closely watched figures in the Chinese AI community.

That is because his background is highly representative of this generation of AI researchers.

Tian Yuandong was born in Shanghai and completed his undergraduate and master's degrees at Shanghai Jiao Tong University before pursuing a Ph.D. at the Robotics Institute at Carnegie Mellon University, which he earned in 2013.

Subsequently, he joined Meta's FAIR (Fundamental AI Research) division, where he remained for nearly a decade, eventually becoming a research scientist director.

His research directions are highly technical, covering reinforcement learning, multi-agent learning, large model inference, planning and decision-making, and theoretical analysis of deep learning. He has long served as an area chair for top conferences such as NeurIPS and ICML.

More critically, he is not just a theorist.

Over the past few years, Tian Yuandong has been deeply involved in research related to the Llama series, as well as core directions such as world models, inference optimization, long-sequence acceleration, and low-cost training. He also led the ELF OpenGo project, which reproduced AlphaZero-style Go play and defeated professional players while running on a single GPU.

Later projects like StreamingLLM and GaLore were also closely related to his research trajectory.

However, what truly made Tian Yuandong stand out in Silicon Valley was a round of layoffs.

In 2025, Meta's internal AI organization underwent dramatic restructuring. To push forward the release of Llama 4, the FAIR team was forcibly reassigned to support the GenAI department. According to Tian Yuandong's later recollections, they had to set aside their original research work and instead take on a large amount of "dirty work" such as post-training and bug fixing.

Then, after the completion of Llama 4.5 training, Meta's AI department began massive layoffs, with Tian Yuandong's team being hit hard.

The most dramatic part was that he had barely posted on X that he and some team members had been affected by the layoffs when, moments later, researchers from OpenAI, xAI, Anthropic, Google DeepMind, NVIDIA, and other companies began flooding the comment section with job offers. Silicon Valley AI startups were practically lining up from the comment section all the way to France.

This was because everyone in the industry knew that what made Tian Yuandong truly scarce was not just his technical abilities but his understanding of both foundational research and large model engineering.

These are the rarest individuals in today's AI industry.

Many researchers understand only papers, and many engineering teams understand only training; Tian Yuandong works on both reinforcement learning theory and large model inference optimization while also participating in large-scale model systems engineering.

More importantly, he has a very clear vision for the future direction of AI.

He does not fully believe in the brute-force Scaling Law but places greater emphasis on inference efficiency, explainability, and underlying theoretical logic. He has publicly stated that current large models have only scratched the surface of intelligence and that true human insight and creativity remain beyond AI's grasp.

Recursive's approach—continuous space reasoning, inference efficiency, open-ended evolution, and explainability—almost perfectly aligns with his research interests.

Thus, after leaving Meta, Tian Yuandong ultimately chose not to join any major company but to start his own venture.

To some extent, this is also a common trend among today's top AI researchers in Silicon Valley: no longer content to be cogs in a large corporation, they are directly betting on the next generation of AI paradigms.

Eight Co-Founders Form an AI All-Star Lineup

Recursive has brought together some of the most important researchers in the AI field over the past decade into a single company.

CEO Richard Socher is one of the most central figures. He was once Andrew Ng's Ph.D. student at Stanford and one of the earliest representatives of the "neural network faction" in NLP. Early on, he pushed neural networks into the mainstream of NLP, later founded MetaMind (acquired by Salesforce), and then created the AI search engine You.com.

Today, many people are familiar with the concept of "Prompt Engineering," and Socher was one of the earliest to systematically propose it.

Another co-founder, Caiming Xiong, has been Socher's long-time collaborator. He rose to senior vice president at Salesforce AI, where he long oversaw Applied AI and large model-related businesses, and participated in research on controllable text generation like CTRL.

Josh Tobin was a key member of OpenAI's robotics and Agents efforts. He worked on the famous Rubik's Cube-solving robot hand project and later founded Gantry, a machine learning infrastructure company.

Then there's Tim Shi, also known as Shi Tianlin. This Chinese entrepreneur, a graduate of Tsinghua University's Yao Class, is equally legendary. He ranked first in his undergraduate class, won the "Yao Class Gold Award," and later entered Stanford for a Ph.D. in AI, but dropped out in 2017 to co-found the AI customer service company Cresta, applying Transformers to real-time customer service Agents ahead of its time.

DeepMind's Tim Rocktäschel has long researched open-ended intelligence, world models, and self-improvement systems. He was also a key researcher on the Genie world model project.

Alexey Dosovitskiy is one of the most influential figures in computer vision. He co-proposed the Vision Transformer (ViT), and his paper "An Image is Worth 16x16 Words" almost single-handedly changed the technical trajectory of the entire field.

Jeff Clune is also a key figure. He has long researched open-ended evolution, AI-generating algorithms, and AI safety, and participated in the Darwin Gödel Machine project. The core of this work is to let AI automatically modify its own code and verify whether optimizations are effective.

If you piece together the research directions of these individuals, you'll find that Recursive's coverage spans almost all of the most critical paths in today's AI field:

Reinforcement learning, world models, Agents, Vision Transformers, open-ended evolution, self-improvement, inference optimization, and AI safety.

More importantly, this group ultimately reached the same conclusion: the next step for AI is not just to be bigger but to be more autonomous.

Thus, the emergence of Recursive is not just an ordinary startup—it is more like a collective bet by Silicon Valley's top AI researchers, wagering that "AI researching AI by itself" will be the starting point for the next leap in capabilities.

And Jensen Huang, Lisa Su, and a host of top-tier investors have already placed their bets.

References:

"Just Now, Tian Yuandong, Who Was Fiercely Competed for by the Entire AI Circle, Officially Announces His Startup—Jensen Huang Invested Too," APPSO

"Tian Yuandong's AI Startup Valued at $4.65 Billion: Old Yellow and Su Ma Invested, Tsinghua Yao Class's Shi Tianlin Is Also a Partner," Quantum Bit

"Jensen Huang and Lisa Su Invest in Tian Yuandong!," Zhidx

"After Tian Yuandong Was Laid Off, New Job Offers Were Lined Up All the Way to France! It Turns Out He Was Discarded After Training Llama 4.5," Quantum Bit

"Renowned Researchers Join $4 Billion Project to Build Self-Improving AI," The New York Times

"Betting on Their Own Unemployment! Tian Yuandong's Eight-Person Team Raises $610 Million, Entering the 'Recursive Evolution' Race," New Intelligence Element

"LeCun Exposes Meta's Cheating on Benchmarks; Tian Yuandong: I Didn't Expect This Outcome," Quantum Bit

THE END

Copyright and Disclaimer

1. Content Copyright: Except for quoted public data, policies, and cases, all content herein is original. Professional data is sourced from authorized databases and government websites, while cases are compiled from real events.

2. Image Licensing: Some images herein are owned or officially licensed, while others are AI-generated. For network images with unclear copyright, ownership remains with the original authors, and any infringement will be removed upon notification.

3. Republishing Guidelines: Unauthorized republication is prohibited. Republishing requires retaining the full source and author attribution.

4. Liability Disclaimer: This article is a commercial figure observation and industry commentary compiled by the author based on public information. The content is for reference only and does not constitute professional advice. Risks arising from use are borne by the user. Chanyelian reserves the final right of interpretation for this article.
