04/27 2026
481

[Text by Chaoshi Business Review]
On April 24, 2026, the global AI industry witnessed a rare "triple launch" event.
In the early hours of the morning, OpenAI officially released GPT-5.5, claiming it to be the "most intelligent and intuitive model to date," bringing the company closer to achieving a "super app."
In the morning, the long-anticipated DeepSeek V4 series made its debut, featuring 1.6 trillion total parameters, 49 billion activated parameters, a 1 million-token context window and, as later announced, open-source availability. Many exclaimed, "The price disruptor is back." In the afternoon, Meituan opened testing for its next-generation foundational large model, LongCat-2.0-Preview, which also supports a 1 million-token context window and can process millions of words in a single inference. Notably, its parameter scale is comparable to that of V4.
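For context, the announced figures imply a sparsely activated design: only a small slice of the total parameters runs for any given token. A quick back-of-envelope check in Python, using only the numbers stated above:

```python
# Headline figures from the V4 announcement cited above.
total_params = 1.6e12    # 1.6 trillion total parameters
active_params = 49e9     # 49 billion activated parameters per token

# Fraction of the model that actually computes for each token.
active_fraction = active_params / total_params
print(f"{active_fraction:.2%}")  # prints "3.06%"
```

In other words, roughly 3% of the model's weights do the work on each token, which is how per-token inference cost can stay modest despite the trillion-plus total scale.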

On the same day, the "trillion-parameter club" welcomed two Chinese contenders.
When combined with the news that "Google plans to invest up to $40 billion in Anthropic," this is a day worth remembering.
On one side are U.S. giants further solidifying their technological dominance and influence. On the other are Chinese tech companies, primarily representing the open-source model. We cannot view both sides through the lens of a simple parameter race, but it is undeniable that both DeepSeek V4 and LongCat-2.0-Preview are comprehensively and deeply adapting to the domestic computing power ecosystem.
It is reported that DeepSeek-V4 has achieved full-stack deep adaptation for mainstream domestic AI chips, including Huawei's Ascend and Cambricon. Meanwhile, LongCat-2.0-Preview's training and inference were completed entirely using domestic computing clusters, making it the only trillion-parameter model trained on domestic cards to date.
Industry media has described them as "training models of equal caliber with a fraction of the resources OpenAI uses."
Objectively, domestic chips still lag behind NVIDIA in single-card absolute performance and cluster interconnect computing power. However, the simultaneous commitment of two Chinese large models to the domestic computing power ecosystem is undoubtedly a milestone—if OpenAI represents "closed-source algorithms + NVIDIA computing power," then the updates from DeepSeek and Meituan's LongCat signify a meaningful step forward for China's AI industry on the path of "open-source + domestic computing power."
"Once mainstream open-source large models achieve large-scale deployment in China's domestic computing power ecosystem, a gap will be torn open in the U.S. chip moat in the AI field," NVIDIA CEO Jensen Huang warned in a recent interview. That concern is now becoming reality.
01 What Does the Simultaneous Entry of Two Chinese Large Models into the "Trillion-Parameter Club" Signify?
The convergence of domestic large models in achieving "computing power autonomy" introduces a notable variable into the AI computing power landscape, which has long been dominated by a few vendors.
Over the past few years, Chinese AI companies have faced not just the isolated challenge of "computing power constraints" but dual constraints of "hardware + software." On the hardware side, NVIDIA has effectively played the role of a "computing power monopolist."
Data shows that its chips account for approximately 90% of the global AI training market and 97% of the AI server market. Under supply-demand imbalances, high-end AI computing chips consistently command a 30% to 70% premium. Huang revealed during a Q4 2025 earnings call that the company's backlog had reached $500 billion, with high-end architectures booked through 2027. Meanwhile, the U.S. continues to restrict advanced chip exports to China, forcing domestic companies to procure performance-limited special products. Both general-purpose large model development and industrial-grade AI deployment face tangible computing power constraints.
On the software side, the trend toward closed-source systems is deepening. OpenAI, Google, Anthropic, and others have fully closed off their core algorithms, training data, and weights, restricting service access in China and prohibiting local deployment and secondary distribution. In 2026, OpenAI formed an alliance with Anthropic and Google to restrict technical distillation by domestic large models.
Faced with these external constraints, building an autonomous and controllable AI ecosystem is no longer a choice but a necessity for domestic companies.

The simultaneous debut of DeepSeek V4 and Meituan's LongCat-2.0-Preview sends a clear signal: domestic chips and the domestic AI ecosystem are achieving a critical leap from "usable" to "user-friendly" in certain cutting-edge scenarios.
More crucially, a virtuous cycle is forming between domestic large models and domestic computing power. The extreme refinement of trillion-parameter models is solidifying the foundation of domestic computing power. It is reported that LongCat-2.0-Preview's training and inference were completed entirely using domestic chips, utilizing 50,000 to 60,000 domestic computing cards—a record for the largest-scale training task using domestic computing power to date.
Conversely, advancements in the domestic chip ecosystem are making models highly cost-effective, rapidly narrowing the gap with foreign closed-source products.
In terms of practical performance, while LongCat-2.0-Preview has received less attention than DeepSeek V4, its capabilities are formidable. Its parameter scale also exceeds one trillion, with actual efficiency ranking among the top tier. For example, it can generate a complex interactive HTML webpage covering the origins and dynastic changes of Chinese history within one minute, delivering not only fluent content and rigorous logic but also visual and coding quality on par with mainstream closed-source models.
"If this path succeeds, it means global developers and enterprises will have more options," noted a senior industry figure. Previously, training trillion-parameter models was an extremely high bar, seen as a realm accessible only to companies with top-tier NVIDIA GPUs. The addition of two Chinese members to the "trillion-parameter club" marks an important watershed for domestic computing power.
02 Why Has Alignment with Domestic Computing Power Become a Consensus?
The breakthrough in domestic computing power hinges on collaboration among model vendors, tech giants, and chip manufacturers. This journey has undergone a profound transformation from passive response to proactive strategy, from isolated breakthroughs to ecosystem-wide consensus.
Before 2022, domestic AI accelerator cards held less than 5% of the market share, and having core technologies controlled by others was an unavoidable reality. Computing power supply relied heavily on imports, not only incurring high procurement costs but also facing the constant risk of supply chain disruptions. Because the CUDA ecosystem was an insurmountable barrier, domestic computing power remained marginalized in the industry, forced to chase established overseas technical routes.
However, geopolitical shifts and the introduction of top-level policies like the "Action Plan for High-Quality Development of Computing Infrastructure" have accelerated the translation of "support for domestic computing power" from a slogan into an industry consensus.
IDC's latest report shows that in the 2025 Chinese AI accelerator card market, domestic chip shipments reached 1.65 million units, capturing over 40% of the market share. Market forecasts suggest that by 2026, domestic AI chips led by Huawei's Ascend will surpass 50% market share for the first time.

Domestic companies now support domestic computing power through three representative mainstream models:
The first is the "self-built intelligent computing cluster" model adopted by internet giants like Alibaba, Tencent, and ByteDance. Leveraging their cloud businesses, these companies have built domestic intelligent computing centers at massive scale, deploying chips from Huawei's Ascend and Cambricon to provide affordable computing power for their own models and third-party developers, lowering industry entry barriers from the supply side.
The second is the "early investment + ecosystem" layout model. Take Meituan as an example: Wang Xing revealed that the company has made sustained, high-intensity investments in AI. "Apart from companies with cloud computing businesses, Meituan likely has the largest AI investment scale among domestic enterprises and has maintained this layout for over three years." Currently, Meituan has built a vast computing matrix around general-purpose GPUs, chip design, semiconductor materials, and edge AI, investing in over 14 semiconductor and smart hardware companies, including Moore Threads, Muxi Corporation (MetaX), and Unisoc.
The third, and noteworthy, model is "software-hardware synergy." During the process of integrating and applying domestic computing power, model vendors engage in continuous interaction and feedback. For instance, to enhance the performance of domestic chips in areas like memory capacity, bandwidth, software ecosystems, and cluster stability, Meituan's AI team rewrote and optimized core operators, developed fully deterministic operators, and designed more "compatible" training frameworks and model structures tailored to domestic hardware characteristics, maximizing the computational potential of domestic chips.
The core lies in "models defining computing power, and computing power supporting models." The pitfalls navigated and the massive engineering experience accumulated by large model companies training on domestic computing power feed directly back into the iterative optimization of domestic chips, accelerating the ecosystem's maturation.
Today, mainstream models like Zhipu's GLM-5, Baidu's ERNIE Bot, Alibaba's Tongyi Qianwen, and Doubao have fully adapted or are in the process of adapting to domestic computing power.
03 The Domestic AI Ecosystem Remains a Formidable Uphill Battle
However, even as DeepSeek, LongCat, and others make breakthroughs, China's AI industry must remain clear-eyed. Compared to NVIDIA and OpenAI, objective gaps persist for both domestic computing power and large model vendors; it is far from time to celebrate.
One telling detail is that DeepSeek has not entirely abandoned the NVIDIA ecosystem, opting instead for a "dual-stack" strategy that runs NVIDIA hardware in parallel with Huawei's Ascend. After all, extreme algorithmic optimization cannot yet fully bridge the gaps in chip interconnect bandwidth, foundational software ecosystems (e.g., CUDA's first-mover advantage), and other physical and ecosystem layers.
An industry insider provided a precise assessment: "DeepSeek's extreme optimization of memory and activation parameters, its innovative use of the MoE architecture, and its relentless focus on computational efficiency per token are not for showmanship but to patch hardware limitations."
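The MoE (mixture-of-experts) idea referenced in that assessment can be illustrated with a toy router: each token is dispatched to only the top-k of many expert sub-networks, so per-token compute tracks the activated parameters rather than the total. This is a generic sketch of the technique, not DeepSeek's actual implementation; all names and sizes here are illustrative.

```python
import numpy as np

def topk_route(logits, k):
    """Toy MoE router: pick the k highest-scoring experts per token."""
    return np.argsort(logits, axis=-1)[:, -k:]  # shape: (tokens, k)

# Illustrative setting: 8 experts, each token routed to 2 of them.
rng = np.random.default_rng(0)
tokens, experts, k = 4, 8, 2
logits = rng.standard_normal((tokens, experts))  # router scores
chosen = topk_route(logits, k)                   # expert indices per token

# Only k of the experts run per token, so the fraction of expert
# parameters activated is k / experts (0.25 in this toy; ~3% for
# a model with 49B activated out of 1.6T total parameters).
active_fraction = k / experts
```

In a real system the routed tokens would then be processed only by their chosen experts, which is what keeps the effective compute close to the activated-parameter count.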
From catching up to partial parity, China's AI industry is destined for a protracted war, but its confidence in the domestic AI ecosystem is growing.
On one hand, China's AI industry has explored a differentiated path combining open-source models, algorithmic innovation, and scenario-driven ecosystems. As the world's largest industrial nation, China boasts the most diverse and extensive real-world scenarios and demands globally—a unique advantage that U.S. companies like OpenAI cannot replicate.
Take Meituan as an example: its nationwide instant delivery network spans over 2,800 cities and counties, accumulating vast data from drone and autonomous vehicle deliveries. These operations cover China's most complex task demands and physical environments, providing a natural testing ground for large model applications and evolution. Real-world businesses like autonomous delivery and food safety also offer genuine "demand pull" for various computing architectures and chip performances.
On the other hand, under this massive demand, domestic large models and computing power are iterating at an "exponential" pace.

Beyond DeepSeek V4 and Meituan's LongCat-2.0-Preview, leading models like Zhipu's GLM-5, MiniMax M2, Baidu's ERNIE Bot 4.0, and Alibaba's Tongyi Qianwen 3.5 continue to iterate rapidly. On the computing power side, next-generation hardware from Huawei's Ascend and Sugon is deploying at scale, with the full "training-inference-deployment" pipeline migrating to domestic computing power bases entering a substantive phase.
As these two paths converge, the combined momentum of domestic models and computing power is accelerating. On April 13, Stanford University's HAI released the "2026 AI Index Report," noting that the performance gap between Chinese and U.S. AI models is narrowing significantly. However, the U.S. still leads in foundational model innovation, capital investment, and computing infrastructure, boasting 5,427 data centers.
Technological breakthroughs never happen overnight. From CUDA to domestic frameworks, from NVIDIA to domestic computing power, from closed-source monopolies to open-source accessibility, China's AI is reconstructing its foundational capabilities through an extreme operation akin to "changing engines mid-flight."
The simultaneous debut of LongCat-2.0-Preview and DeepSeek V4 places China's AI ecosystem at a new starting point and sounds the clarion call for a flourishing, autonomous, and controllable new era.
The growing number of Chinese players entering the "trillion-parameter club" signals a new phase for China's AI ecosystem and serves as a critical footnote to the domestic AI industry's march toward scale and autonomy.