02/12 2026
When ChatGPT made its debut, a common question in China was: “How long will it take for China to catch up?”
Just two years later, the landscape has undergone a profound transformation. According to the Ministry of Industry and Information Technology, China has built 42 intelligent computing clusters with over 10,000 cards each, boasting an intelligent computing capacity exceeding 1,590 EFLOPS—ranking among the highest globally. China now leads the world in AI patents, accounting for 60% of the global total.
Behind these milestones, China’s AI industry has not followed a predetermined “catch-up” path but has instead forged a unique development trajectory through the synergistic evolution of computing power and algorithms. The rapid growth of China’s AI sector fully underscores the strategic importance of prioritizing computing power.
Recently, two closely linked industry events have clearly outlined the contours of this path. On February 5, the core node of the National Supercomputing Internet went live in Zhengzhou for trial operation, featuring three Sugon (Shuguang) ScaleX 10,000-card supercomputing clusters that together form the nation’s largest single-entity pool of domestically produced AI computing resources. On February 10, the “Domestic 10,000-Card Computing Power Empowering Large Model Development Seminar and Joint Research Initiative Launch Ceremony” was held in Zhengzhou, bringing together industry, academia, research, and application sectors to discuss the collaborative development path of “high computing power + large models.”
The flurry of industry activities within a single week underscores a clear trend: the deep integration and collaborative innovation of domestic computing power, domestic models, and AI applications have become the inevitable route for China’s AI industry. The key to breakthroughs in intelligent computing lies not in isolated technological advances or standalone computing power development but in a systemic approach that harmonizes software and hardware—a consensus now widely shared across the industry.


From Feeding Models to Co-Evolution of Models × Computing Power
The development of the AI industry is a process of continuous integration and collaborative innovation across the technology stack. In the early days of small models, chips, frameworks, and algorithms developed relatively independently, with each link separated and technological progress limited to isolated breakthroughs.
However, with the rise of domestic large models such as DeepSeek, Qwen, and Seedance, and their competition with international top-tier models, the software-hardware synergy in the AI industry has reached new heights.
Unlike the early days of large model development, when “stacking computing power could work miracles,” today, continuous innovation in algorithm architectures, rapid expansion of application scenarios, and explosive growth in inference demand are forcing intelligent computing power to undergo a “dimensional upgrade”—shifting from merely providing computing power to developing systemic engineering capabilities centered around training, inference, scheduling, stability, and cost.
After the “Hundred Models War,” the number of foundational models has gradually converged, but architectural innovation has not ceased; instead, it has become a new dividing line. The China Academy of Information and Communications Technology (CAICT) noted in its “Research Report on the Development of the Artificial Intelligence Industry” that, at the algorithmic architecture level, sparse attention mechanisms, represented by DeepSeek’s NSA and Moonshot AI’s MoBA, are becoming one of the important technical paths for improving model inference efficiency.
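The efficiency argument behind sparse attention can be illustrated with a small, generic sketch. The top-k masking below is an illustration only, not the actual NSA or MoBA designs: instead of every query attending to all n keys at O(n²) cost, each query keeps only a handful of its highest-scoring keys.

```python
import numpy as np

def dense_attention(Q, K, V):
    """Standard softmax attention: every query scores every key (O(n^2) work)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def topk_sparse_attention(Q, K, V, k=4):
    """Toy sparse attention: each query keeps only its k highest-scoring keys,
    so the softmax and value aggregation per query involve ~k terms, not n."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    kth_best = np.sort(scores, axis=-1)[:, -k][:, None]  # k-th largest per row
    masked = np.where(scores >= kth_best, scores, -np.inf)  # drop the rest
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 16, 8
Q, K, V = rng.normal(size=(3, n, d))
dense_out = dense_attention(Q, K, V)
sparse_out = topk_sparse_attention(Q, K, V, k=4)
print(dense_out.shape, sparse_out.shape)  # (16, 8) (16, 8)
```

Production designs select keys in blocks and learn the selection, but the cost structure is the same: shrinking the per-query key set is what raises inference efficiency per unit of computing power.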
CAICT experts point out that the development and innovation of many large models at this stage cannot be separated from deep integration with underlying software and hardware systems. The entire software and hardware ecosystem will be the focal point of model innovation and intelligent computing infrastructure competition in the next phase.
Therefore, the intelligent computing industry is undergoing a core transformation: in the past, it was about “feeding models with computing power,” but now, models and computing power mutually drive each other. Models use smarter architectures to extract higher efficiency per unit of computing power, while computing power employs more mature systemic engineering to transform model innovations into scalable productivity.


Scenarios and Applications Driving China’s Intelligent Computing to Accelerate Its Dimensional Upgrade
CAICT experts point out that the logic of the intelligent computing industry is undergoing a profound transformation. In the past, the development of the intelligent computing industry primarily relied on the scale effect of “stacking computing power,” but today, application-oriented approaches have become the core driving force for accelerating intelligent computing development.
According to the National Data Bureau, at the beginning of 2024, China’s daily token consumption was 100 billion; by the end of June 2025, the figure had surpassed 30 trillion, a roughly 300-fold increase. This intuitively reflects the explosive growth of AI application deployment, and the massive application demand highlights the core challenges facing current intelligent computing power:
“Today, computing power that is useful, easy to use, and affordable is far from sufficient,” said Cao Zhennan, Deputy Director of the National High-Performance Computer Engineering Technology Research Center, summarizing the current state of domestic intelligent computing power.
First, computing power performance and stability. As the AI industry evolves toward trillion-parameter large models, world models, and physical AI, demand is no longer just about peak performance; it also emphasizes stability, fault tolerance, and engineering reliability under long-duration, large-scale training and high-concurrency inference. Once applications scale up, bottlenecks such as “instability and poor usability” can escalate into serious failures.
Second, high barriers to using computing power. The accelerated innovation of large models has deepened the coupling between hardware, frameworks, compilers, communication, and operators; meanwhile, the underlying layer features a mix of diverse heterogeneous chips, while the upper layer presents a “hundred models, thousand forms” combination of models and applications. Rewriting operators, modifying parallel strategies, adjusting communication and memory management, and redoing performance tuning all add up; when migration costs are too high, users are effectively discouraged from switching.
Third, the disconnect from industrial applications is the core issue most easily overlooked today. Many intelligent computing centers were built in a coarse, capacity-first manner, leaving some with computing power utilization rates of only 30%. Some centers chased single headline indicators during construction while neglecting the application demands of diverse heterogeneous ecosystems, leaving them “outdated upon completion.”

The Irreplaceable Role of the National Supercomputing Internet as the Engine of Industry Intelligence
From the perspective of software-hardware collaborative innovation, the construction of the National Supercomputing Internet and the landing of its core node in Zhengzhou represent a systemic practice of full-chain collaboration in the intelligent computing industry and a core driving force for promoting industrial intelligence popularization.
1. A Globally Leading Computing Power Network: Application-Oriented Top-Level Design
The National Supercomputing Internet is positioned as a national-level comprehensive computing power service platform: it uses massive, inclusive, and easy-to-use domestic computing power as its foundation, with full-chain security and stability as its cornerstone, upgrading computing power from a “resource” to a “service” that can be delivered at scale.
Currently, the platform has connected more than 30 supercomputing centers and intelligent computing centers nationwide, forming the world’s leading heterogeneous computing power resource pool. More critically, it does not stop at merely “connecting computing power to the internet” but continuously promotes adaptation and optimization with domestic large models and domestic computing power chips, shifting from “usable” to “easy to use and sustainable.”
The top-level design logic of the National Supercomputing Internet is application-oriented, focusing on real industrial workloads. Take the core node launched for trial operation in Zhengzhou as an example: supported by over 30,000 domestically produced AI computing cards, it possesses full-scenario computing power capabilities and can provide services for high-performance scenarios such as trillion-parameter-level training, high-throughput inference, and AI for Science.
Application scale is the most powerful validation of a computing power platform’s capabilities: the National Supercomputing Internet has accumulated 1.13 million registered users, with a single-day job processing peak surpassing 1.03 million and monthly visits exceeding 11.3 million; the AI community has integrated 1,100+ open-source large models. The core node launched for trial operation in Zhengzhou has also completed deep adaptation for hundreds of applications, covering 23 industry sectors.

2. AI Full-Industry-Chain Aggregator: Bridging the Ecological Closed Loop from Computing Power to Applications
In the past, the fragmented AI industry ecosystem made it difficult for computing power to circulate efficiently, for models to adapt quickly, and for applications to spread widely. To address this pain point, the Supercomputing Internet has built a full-chain AI industry service capability spanning “computing power, platform, model, data, application, and ecosystem,” promoting full-stack software-hardware research and closing the industrial loop of “computing power infrastructure + large model vendors + application scenarios.”
The core of this closed loop lies in deeply integrating computing power resources with large model research and development and application scenarios, driving collaborative innovation across the entire industrial chain. This represents not only a technological breakthrough but also an innovation in industrial development models, symbolizing a transition from fragmentation to collaboration and from isolated innovation to full-chain integration.
3. Accelerating National Computing Power Integration and Promoting Universal Access to Computing Power Across Industries
The significance of the core node in Zhengzhou extends beyond “adding another large computing power center”; as a core node of the “Eastern Data, Western Computing” initiative and a hub of the National Supercomputing Internet, it is accelerating the unified scheduling and stable delivery of national computing power across regions, centers, and architectures. This is precisely the core issue that the national integrated computing power network aims to resolve.
More importantly, the Supercomputing Internet makes “universal access to computing power” a reality: it turns computing power into a public utility delivered much like water and electricity, lowering the overall threshold for AI application deployment.

Demand during the core node’s current invitation-based testing also indirectly validates the platform’s “inclusive” value. On the one hand, training and inference demands at the thousand-card and ten-thousand-card levels continue to emerge, showing that the industrialization of large models is entering a higher-intensity computing power phase; on the other hand, a large number of industry application demands at the hundred-card level have also been received. The coexistence of demands at such different scales is precisely what makes a nationally integrated computing power scheduling system necessary: it must support “top-tier sprints” while also covering “long-tail deployments.”
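The need to serve ten-thousand-card sprints and hundred-card long-tail jobs from one pool can be sketched with a toy first-fit scheduler. The cluster names, capacities, and jobs below are hypothetical; real schedulers must also account for network topology, priority, and preemption.

```python
# A deliberately simplified first-fit scheduler over heterogeneous clusters.
# Cluster names, capacities, and jobs are hypothetical illustrations only.
clusters = {"cluster-a": 10_000, "cluster-b": 1_000, "cluster-c": 500}

def schedule(jobs, clusters):
    """Place each job (name, cards_needed) on the first cluster with enough
    free cards; return the placements and any jobs left waiting in queue."""
    free = dict(clusters)
    placed, waiting = {}, []
    for name, cards in jobs:
        for cluster_id, capacity in free.items():
            if capacity >= cards:
                free[cluster_id] -= cards
                placed[name] = cluster_id
                break
        else:  # no cluster had room for this job
            waiting.append(name)
    return placed, waiting

jobs = [
    ("frontier-training", 9_500),  # a "top-tier sprint" near 10k-card scale
    ("industry-finetune", 800),    # mid-scale industry demand
    ("edge-inference", 100),       # a hundred-card "long-tail deployment"
]
placed, waiting = schedule(jobs, clusters)
print(placed, waiting)
```

Even this naive policy shows why unified scheduling matters: the large job saturates one cluster, so the mid-scale job spills to another, while the small job backfills the leftover capacity that would otherwise sit idle.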
Conclusion
When “models × computing power” enter a phase of co-evolution and application demands force intelligent computing to undergo a dimensional upgrade, overtaking in the AI industry no longer relies on a single breakthrough but on long-term efficiency curves: lower computing power costs, faster iteration speeds, more stable engineering delivery, and broader industry penetration.
China’s differentiated path in AI will ultimately be defined by this systemic capability and will be validated in the production sites of thousands of industries.