04/22 2026

Author | An Fujian
2.5 billion RMB raised within a single month, and a valuation that doubled.
Behind these figures lies a collective bet by the capital market on a single judgment: world models are becoming the operating system of the embodied AI era, and the first company to industrialize them will define the ceiling of the sector.
On April 16, DataMi announced the completion of a 1.5 billion RMB Series B1 funding round. Combined with the nearly 1 billion RMB Pre-B round closed in March, this world model company, founded just three years ago, has raised 2.5 billion RMB within a month. Its valuation has surged from 5 billion to over 10 billion RMB, making it China's first unicorn in the world model field.
Behind the frenzy of capital bets lies the transition of world models from concept to industrial implementation.
The fundamental reason why DataMi has attracted overwhelming investments from top state-owned funds, industrial capital, and existing shareholders is:
DataMi is not only the earliest domestic company to deploy world models but also the first globally to move world models from the lab to mass production in real robots and autonomous driving.
Huang Guan, founder and CEO of DataMi, summarizes this strategy as the 'Intelligence Flywheel':
With world models at its core, it integrates VLA and reinforcement learning to build a closed loop of data, models, hardware, and scenarios, accelerating China's embodied AI toward its GPT-3 moment.
Capital Intensively Bets on World Models: China's First World Model Unicorn Worth Over 10 Billion Emerges
Major players are increasingly betting on world models.
On April 16, Tencent Hunyuan released and open-sourced the Hunyuan 3D World Model 2.0; on the same day, Alibaba unveiled its world model, HappyOyster.
Previously, NVIDIA introduced the World Foundation Model - COSMOS; Google DeepMind released Genie-3; Tesla also deeply integrated world model technology into its simulation system.
The concentrated entry of these tech giants further confirms the position of world models as next-generation AI infrastructure.
Huawei has likewise listed world models, another technological high ground after LLMs, as the 'first among the top ten technology trends for the future intelligent world by 2035.'
With major players entering the field and giants placing their bets, China's first world model unicorn, valued at over 10 billion RMB, has emerged.
Since August 2025, DataMi has raised money at a 'turbocharged' pace, securing hundreds of millions of RMB every one to two months.
In late August 2025, it completed consecutive Pre-A & Pre-A+ rounds totaling hundreds of millions of RMB, with investors including Guangzhou Industrial Investment, Huaqiang Capital, Guozhong Capital, and Zifeng Capital.
In early November, it completed a new round of hundred-million-RMB Series A1 funding, jointly invested by Huawei Hubble and Huakong Fund.
In early December, it secured 200 million RMB in Series A+ funding, with investors including Huakong Fund and Fortune Capital.
In early March 2026, it completed a 1 billion RMB Pre-B round, with investors including chip and automotive industry capitals such as SMIC Poly Capital, Puke Investment, and Linxin Capital.
On April 13, it announced the completion of a 1.5 billion RMB Series B funding round, with investors including a well-known tech giant, multiple national team funds, and Yili Group's CVC Jianling Capital.
The latest two rounds turned into a capital scramble: 2.5 billion RMB raised in one month, with the valuation exceeding 10 billion RMB and rising by 5 billion within that same month, putting DataMi at the front of the global world model sector.
Behind the 10 billion valuation, outside attention naturally turns to the founder of DataMi.
Huang Guan, DataMi's 'post-90s' founder and CEO, has an elite academic pedigree: an undergraduate degree in automation from Huazhong University of Science and Technology, a master's degree from the Chinese Academy of Sciences, and a Ph.D. in automation from Tsinghua University. Along the way, he interned at Microsoft Research Asia, working on early deep learning research alongside computer vision scientists such as Sun Jian and He Kaiming, and he also worked at institutions including Samsung China Research Institute.
His subsequent career traces a clear upward trajectory:
In 2016, Huang joined Horizon Robotics, focusing on visual AI and serving as the head of visual perception technology;
In 2021, Huang delved deeper into visual perception and entered the autonomous driving field, co-founding PhiGent Robotics;
In 2023, Huang founded DataMi on his own, moving from vision to perception and finally taking direct aim at the high ground of AI technology: world models.
Today, DataMi is backed by the Intelligent Vision Laboratory of Tsinghua University's Department of Automation. Huang has led his team to multiple global AI competition championships and to a series of internationally recognized AI results.
The core team consists of top researchers from prestigious institutions like Tsinghua University and the Chinese Academy of Sciences, as well as executives and industry experts from renowned companies such as Baidu, Microsoft, and Horizon Robotics. The team possesses world-class expertise in algorithms, data, infrastructure, and other model-related full-stack capabilities. They have published over 200 papers in top AI conferences and journals, led winning teams in dozens of the world's most influential AI competitions, and have a hardware team with extensive experience in mass-producing humanoid robots.
As a core executive in previous roles, Huang led or participated in raising over 1 billion RMB in total, which has given DataMi a keen feel for the capital market.
Leading the Layout of World Models: Implementing Embodied AI Through the 'World Model + VLA + Reinforcement Learning' Approach
The sustained bets by capital are based on judgments about DataMi's technological approach.
To understand this, one must first recognize the true bottlenecks facing embodied AI today.
Embodied foundation models dominated by Vision-Language-Action (VLA) architectures face two stubborn pain points:
First, the model architecture is inefficient and struggles with complex real-time reasoning;
Second, data collection efficiency in the real physical world is low, and costs are extremely high.
The introduction of world models provides a new pathway to break through these challenges.
World models offer a new system and the best path to achieving scale in embodied AI. Their introduction rewrites the rules of the game, reshaping the overall system architecture, including world-prediction pre-training, visual-action pre-training, post-training, and the data system.
Founded in 2023, DataMi was the first Chinese technology company to systematically invest in world models, and it has released a series of world models billed as industry firsts and 'globally leading.'
This product system can be understood along a main thread:
The GigaWorld series addresses data generation and physical prediction;
The GigaBrain series addresses embodied execution and scenario generalization.
Together, they form the technological foundation of DataMi's embodied AI.
First, making world model-generated data usable.
GigaWorld-0 was the first in the world to verify the key claim that 'data generated by a world model can effectively improve the performance of real physical robots (VLA).'
Second, enabling world models to move from the lab to real-world scenarios.
GigaWorld-policy, officially released in early March this year, delivered the world's first major breakthrough for the 'action world model (WA)' in real-time performance, training efficiency, and success rate.
Finally, proving its lead with a first-place global ranking.
GigaWorld-1, an embodied world model in the GigaWorld series, defeated models from Google, NVIDIA, Alibaba, and others in the authoritative WorldArena benchmark for world models, securing the global top spot.
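The GigaWorld claims all center on one idea: trajectories imagined by a world model can substitute for costly real robot data in training. A minimal, purely illustrative sketch of such a data-mixing recipe (the function and the ratio are hypothetical, not DataMi's actual pipeline):

```python
import random

def mix_batches(real_trajs, synthetic_trajs, synth_ratio=0.5, batch_size=8, seed=0):
    """Assemble one training batch that mixes real robot trajectories with
    world-model-generated ones at a fixed ratio (hypothetical recipe)."""
    rng = random.Random(seed)
    n_synth = int(batch_size * synth_ratio)
    n_real = batch_size - n_synth
    batch = rng.sample(synthetic_trajs, n_synth) + rng.sample(real_trajs, n_real)
    rng.shuffle(batch)
    return batch

# Toy stand-ins for trajectory datasets.
real = [f"real_{i}" for i in range(100)]
synth = [f"synth_{i}" for i in range(100)]

# A 75/25 synthetic-to-real mix: 6 imagined samples, 2 real ones per batch.
batch = mix_batches(real, synth, synth_ratio=0.75, batch_size=8)
```

The point of the sketch is the leverage: if most of each batch can come from generated data, the expensive real-robot collection budget shrinks accordingly.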
If the GigaWorld series represents the technological high ground occupied by DataMi in the world model field, then the GigaBrain series is a crucial step in extending this high ground to the most cutting-edge scenarios.
Regarding the challenges of implementing embodied AI, there has been a debate in the industry between choosing world models or VLA technology.
The former excels in understanding physical laws and data generation, while the latter is adept at executing complex actions. Each approach has its advantages, but relying solely on either one is insufficient to cover the full complexity of real-world scenarios.
DataMi's answer is to reject this binary choice.
DataMi has demonstrated the possibility of an alternative path for embodied AI models through a synergistic approach of 'world model + VLA + reinforcement learning.'
The world model's strong 'generalization ability' finds solutions for unseen scenarios, while VLA addresses the complexity of tasks, and reinforcement learning ensures accuracy and reliability.
When the three work together, physical AI achieves a 95% success rate in 90% of scenarios across 100 common tasks.
DataMi's embodied foundation model, GigaBrain-0, is China's first end-to-end embodied foundation model to integrate world model-generated data with real robot operation data.
Subsequently, the GigaBrain series has continued to iterate:
GigaBrain-0.1 took the global top spot in large-scale real-world embodied AI evaluations;
GigaBrain-0.5M* is the world's first embodied foundation model to achieve efficient learning and self-evolution through world model-based reinforcement learning.
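The 'world model + VLA + reinforcement learning' synergy described above can be caricatured as a toy loop in which the world model acts as an imagined simulator and an RL-style search refines the policy inside it. Everything here (the one-parameter policy, the toy dynamics, the hill-climbing rule) is an assumption for illustration, not DataMi's implementation:

```python
import random

class WorldModel:
    """Stand-in for a learned world model: predicts the next state and a reward."""
    def step(self, state, action):
        next_state = state + action      # toy dynamics
        reward = -abs(next_state)        # best when the state is driven to 0
        return next_state, reward

class VLAPolicy:
    """Stand-in for a VLA policy: maps the observed state to an action."""
    def __init__(self):
        self.gain = 0.1                  # single learnable parameter
    def act(self, state):
        return -self.gain * state

def imagined_return(world, policy, state=5.0, horizon=20):
    """Total reward over a rollout imagined entirely inside the world model."""
    total = 0.0
    for _ in range(horizon):
        state, reward = world.step(state, policy.act(state))
        total += reward
    return total

def refine(world, policy, iters=50, step=0.05, seed=0):
    """Crude RL-style hill climbing: perturb the policy, keep changes that
    raise the imagined return. No real-robot rollouts are needed."""
    rng = random.Random(seed)
    best = imagined_return(world, policy)
    for _ in range(iters):
        old_gain = policy.gain
        policy.gain += rng.uniform(-step, step)
        candidate = imagined_return(world, policy)
        if candidate > best:
            best = candidate             # keep the improvement
        else:
            policy.gain = old_gain       # revert

    return best

world, policy = WorldModel(), VLAPolicy()
before = imagined_return(world, policy)
after = refine(world, policy)            # after >= before by construction
```

The design point it illustrates: because candidate policies are scored inside the world model, the expensive and risky part of reinforcement learning (real-world trial and error) is moved into imagination.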
No matter how capable a model is, it must ultimately be validated by real robots.
Not content to be just a model company, DataMi has moved from 'models' to 'hardware' in a dual-wheel-drive model. This also closes the robot data chain and strengthens the system's capacity for continuous evolution.
Starting with data collection from real robots and entering real-world scenarios to solve specific problems is essential for training truly effective world models for robots.
In 2026, DataMi is transitioning from technological breakthroughs to 'generalized scenario implementation.'
On January 31, DataMi officially began the first batch of deliveries of its fully self-developed next-generation physical AGI native robot, Maker H01.
More than two months later, DataMi, in collaboration with FAW Tooling and Alibaba Cloud, implemented a full-process solution for embodied AI robots in real industrial manufacturing scenarios, powered by the new GigaBrain-1.
This is China's first full-process embodied AI solution centered on real automotive manufacturing scenarios, covering high-frequency tasks such as box destacking, cross-area transportation, dynamic obstacle avoidance, and precise operations.
Based on real data, it also feeds back into the development of the overall embodied AI foundation model.
Currently, DataMi's general-purpose robot, Maker H01, has been deployed in scenarios including automotive, 3C, warehousing, and home services, with an annual target of delivering a thousand units.
Thus, DataMi has constructed a complete closed loop: data collection and annotation, model training, application deployment, and iterative upgrades. Each stage reinforces the next.
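The closed-loop dynamic can be caricatured as a simple iteration (purely illustrative; the stage functions and growth rates are invented to show the reinforcement effect, not measured):

```python
def collect(deployment_scale):
    """More deployed robots yield more real-world trajectories."""
    return 100 * deployment_scale

def train(model_quality, n_trajectories):
    """Model quality improves with data, with diminishing returns."""
    return model_quality + n_trajectories ** 0.5 / 100

def deploy(model_quality):
    """Better models unlock more deployment scenarios."""
    return 1 + model_quality

quality, scale = 1.0, 1.0
history = []
for _ in range(5):                       # one pass around the flywheel per loop
    data = collect(scale)                # data collection and annotation
    quality = train(quality, data)       # model training
    scale = deploy(quality)              # application deployment
    history.append(quality)              # upgrade iteration feeds the next pass
```

Under these toy assumptions, quality rises on every pass: each stage's output enlarges the input of the next, which is the sense in which the loop 'reinforces' itself.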
Beyond Embodied AI: Building the 'Intelligence Flywheel' of World Models
While embodied AI is DataMi's core battleground, the potential of world models extends far beyond.
The emergence of world models is rewriting the logic of AI implementation:
In the AI 1.0 era, progress largely relied on scenario-driven data loops, improving data quality and quantity to continuously optimize models and application effects.
Under this model, the accumulation of data volume determined the model's performance in specific scenarios, known as the 'data flywheel.' However, its drawback was insufficient generalization ability, struggling when scenarios became complex.
In the AI 2.0 era of general intelligence, what matters most is advancing foundation models and commercial applications in tandem, creating an intelligent closed loop: the 'intelligence flywheel.'
The core logic of the intelligence flywheel is that foundation models possess general physical common sense and understanding capabilities, no longer relying on scenario-specific data but instead using their built-in intelligence to understand new tasks.
Under this logic, world models break through the previous ceiling on model capability, and launching into popular sectors that support large-scale commercialization becomes the natural way to close the intelligence flywheel's loop.
Around this intelligence flywheel logic, beyond embodied AI, DataMi has also deeply deployed two frontier sectors for world model implementation: autonomous driving and content generation.
These are not mere business additions but extensions of the same underlying world model capabilities to different scenarios.
Even before entering the embodied AI sector, DataMi had moved into autonomous driving, drawing on Huang Guan's early experience in visual recognition.
DriveDreamer, proposed by DataMi in September 2023, is the world's first autonomous driving world model oriented toward the real world. Invited by NVIDIA for presentation and accepted as an oral, it is one of the most influential papers at ECCV 2024.
Previously, scaling autonomous driving was fraught with difficulty because the training data distribution underrepresented rare long-tail scenarios such as extreme weather, sudden accidents, and jaywalking pedestrians.
World models endow AI with the ability to predict physical laws (e.g., gravity, collisions, causality), enabling AI to truly predict the external world like humans.
DriveDreamer4D and ReconDreamer, jointly developed by DataMi, Peking University, Li Auto, and the Institute of Automation, Chinese Academy of Sciences, achieved free-viewpoint reconstruction and generation in autonomous driving scenarios.
For example, based on the world model's video generation capabilities, DriveDreamer4D enhances video rendering effects in complex lane-changing scenarios, making vehicles and lane markings clearer.
DataMi is the first company in China to achieve commercial applications of world models. Currently, through mass production collaborations with multiple OEMs, including Li Auto, XPENG, GAC Group, ECARX, and Horizon Robotics, it has achieved large-scale industrial implementation globally.
In the field of content generation, world models are also transforming the underlying logic.
Within the current cognitive landscape, video generation undoubtedly represents the most prominent and intuitive technological path for world models. As demonstrated by Google's Genie-3, AI is evolving from simple pixel stacking to understanding underlying physical logic.
Given only semantic instructions, an engine built on a world model can automatically simulate video clips.
Sora, which once amazed everyone, is built on an end-to-end model, while Jijia Vision takes a different approach. Jijia Vision has launched a new generation of content generation product, the "Yisu Engine," based on a world model.
The Yisu Engine integrates three components: YISU, China's first ultra-long-duration, Sora-level video generation model, which once ranked first globally on the VBench evaluations; WonderTurbo, the world's first real-time 3D world model framework, over 15 times faster than concurrent work from Stanford and MIT; and HumanDreamer-X, the first 3D human generation framework to combine 'generation + reconstruction.' Together they form a one-stop content creation engine that turns the world model's understanding of physical laws directly into content productivity.
By continuously iterating its foundation models and extending its industrial layout from the 'brain' to the body, DataMi is driving large-scale commercial deployment in embodied AI, autonomous driving, and content generation, while feedback from those applications drives the continuous evolution of the foundation models. The 'intelligence flywheel' is already spinning.
Sprinting toward the GPT-3 moment of embodied AI and the summit of physical AGI, DataMi may not have far to go.