Sino-US AI Computing Power: Mid-Game Showdown - Openness vs. Closure

12/05/2025

Recently, Google's TPU has ridden the wave of Gemini 3's resurgence, and its growth prospects have expanded sharply. Meta is reportedly weighing an investment of billions of dollars to back it, and analysts have raised TPU production forecasts by 67%, to 5 million units. Leveraging a full-chain closed loop of 'chip - optical switching network - large model - cloud services,' Google's intelligent computing system has regained prominence in the AI race, reinforcing the American trend toward closed monopolies.

Meanwhile, open-source models like DeepSeek are hot on its heels. Earlier this month, DeepSeek V3.2 and its enhanced long-thinking model were unveiled. The former matches ChatGPT's performance in tests, while the latter directly competes with top closed-source models like Gemini. This signals the growing momentum of China's open-source approach, with the domestic intelligent computing system showcasing strong potential for ecological collaboration at the application layer.

Thus, Sino-US competition in the AI industry has reached a pivotal stage, and the contrast between 'open collaboration' and 'closed monopoly' is becoming increasingly pronounced. In the layout of intelligent computing ecosystems in particular, the two camps may be gearing up for a climactic showdown of systemic capabilities.

01 From Gemini 3 to TPU v7: The Apex of the Software-Hardware Integrated Closed Loop

Undoubtedly, Google TPU's sudden popularity owes much to the validation of Gemini 3's model capabilities. As an ASIC designed around Google's TensorFlow framework, TPU builds its full-stack closed loop through integrated software-hardware design, capturing the external user market as upper-layer applications break through, and it has been hailed as the strongest alternative to NVIDIA GPUs.

The term 'software-hardware integration' means that the hardware design caters fully to the needs of upper-layer software and algorithms. For instance, Gemini 3's training and inference pipelines are tightly matched to TPU clusters, yielding exceptional power efficiency: TPU v5e reportedly draws only 20%-30% of an NVIDIA H100's power, while TPU v7 doubles performance per watt over its predecessor.

Currently, Google has forged a closed, efficient loop through vertical integration of 'chips + models + frameworks + cloud services.' On one hand, this markedly improves the efficiency of its own AI R&D and application development; on the other, it carves out a niche alongside the NVIDIA-dominated mainstream and gives Google command of a separate intelligent computing track. Meta's reported interest in purchasing TPUs has further fueled the system's popularity.

Some industry insiders note that, from Apple to Google, the American vertically closed approach has nearly reached its zenith, reflecting the tech giants' desire to monopolize entire industrial chains in order to consolidate and expand their spheres of interest. From an ecosystem-development standpoint, however, the closed model lacks long-term vision: it tends to sap innovation upstream and downstream of the industry and to produce a highly centralized, winner-take-all pattern.

Moreover, judging from TPU's application scenarios, the software-hardware integrated closed loop is clearly a game for giants. Analysts note that Google's clustered design and 'software black box' force users to rebuild an entire set of heterogeneous infrastructure. Unless a customer needs to train trillion-parameter models, TPU's systolic arrays cannot be kept fully busy, and the electricity savings may not offset the migration costs.
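A rough back-of-the-envelope sketch illustrates why small workloads leave a systolic array underused. The 128x128 matrix-unit size below is an assumption based on commonly cited TPU figures and differs across generations; the calculation is illustrative only, not a real performance model.

```python
# Back-of-the-envelope sketch (not a real performance model): how much of a
# systolic matrix unit a single matmul tile can occupy. The 128x128 MXU size
# is an assumed, commonly cited TPU figure and differs across generations.

MXU_DIM = 128  # assumed systolic array dimension

def mxu_utilization(m: int, n: int) -> float:
    """Rough fraction of the MXU kept busy by an (m x k) @ (k x n) matmul tile,
    ignoring pipelining, padding, and memory effects."""
    rows = min(m, MXU_DIM)   # output rows mapped onto the array
    cols = min(n, MXU_DIM)   # output columns mapped onto the array
    return (rows * cols) / (MXU_DIM * MXU_DIM)

# A small inference-style matmul vs. a large training-style one:
print(mxu_utilization(8, 64))       # 0.03125 -- array mostly idle
print(mxu_utilization(4096, 4096))  # 1.0     -- array fully occupied
```

Under these assumptions, the hardware only pays off when workloads are large enough to keep the array saturated, which is the article's point about trillion-parameter training.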

Additionally, because TPU's highly closed technology route is incompatible with mainstream development environments, users also need a dedicated engineering team to work with its XLA compiler and rewrite low-level code. In essence, only enterprises on the scale of Google and Meta are in a position to switch to the TPU route, and only when computing scale reaches a certain level can the energy-efficiency advantages of a customized product be fully realized.
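To make concrete what 'working with the XLA compiler' involves, here is a minimal sketch using JAX, one of the Python frameworks Google supports on TPU. The function and shapes are invented for illustration; the code runs on CPU as written, and using a TPU additionally requires Google's TPU runtime and cloud environment, where reaching the advertised efficiency still means restructuring models and data layouts around the compiler.

```python
# Minimal JAX sketch: the programmer writes array code, and the XLA compiler
# lowers it for whatever backend is attached (CPU, GPU, or TPU).
import jax
import jax.numpy as jnp

@jax.jit  # traced once, then compiled by XLA for the current backend
def attention_scores(q, k):
    # Toy transformer fragment: scaled dot-product attention scores.
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]), axis=-1)

q = jnp.ones((8, 64))
k = jnp.ones((8, 64))
print(attention_scores(q, k).shape)  # (8, 8) -- runs on CPU as written
print(jax.devices())                 # TPU devices only appear inside
                                     # Google's TPU runtime/cloud environment
```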

It is undeniable that leading enterprises like Google have achieved rapid single-point breakthroughs in specific tracks through vertical integration and self-built closed loops, and have helped sustain the flourishing of American tech giants. In the context of Sino-US AI competition, however, the American closed-monopoly approach has already staked out the track with its first-mover advantage, so passively following and playing catch-up is increasingly inadequate for the development needs of China's intelligent computing industry.

Faced with the 'small yard, high fence' blockade, how to fully leverage the advantages of the national system and unite all forces to break down barriers and open up pathways has become crucial to narrowing the gap between the Chinese and American AI systems.

02 Ecosystem Collaboration across Diverse, Heterogeneous Systems: The Open Path to the Next Stage of the Race

Compared with the American oligopoly model, China's intelligent computing industry is reshaping an open ecosystem built on a diverse, heterogeneous system. From top-level design to industrial implementation, 'open source and openness plus collaborative innovation' has become the consensus across the entire domestic software and hardware stack.

At the policy level, the Action Plan for the High-Quality Development of Computing Power Infrastructure calls for building a rationally laid out, ubiquitously connected, flexible, and efficient computing power internet, strengthening the integration of heterogeneous computing power with networks, and achieving cross-domain scheduling and orchestration of diverse, heterogeneous computing power. The relevant departments have also repeatedly emphasized encouraging all parties to explore new construction and operation models for intelligent computing centers and multi-party collaboration mechanisms.

Extending to the AI application layer, the Opinions on Deeply Implementing the 'AI+' Initiative likewise call for deepening high-level openness in the artificial intelligence field and promoting the open-sourcing of accessible technologies... It is evident that the country has laid out a distinctly Chinese answer for artificial intelligence and intelligent computing: not blindly following the closed route, but seeking to catch up and overtake through differentiation within an open pattern.

In reality, this top-level design is grounded in the industry's practical needs. Under the US technology blockade, China's intelligent computing industry faces two main challenges: bottlenecks in single-card performance and high computing costs. Beyond continued work on core technologies such as chips, models, and basic software, the more effective approach at present is to build larger, more diverse, and more efficient intelligent computing clusters to break through the AI computing power bottleneck.

Industry surveys indicate that no fewer than 100 computing clusters at the thousand-card scale have been announced domestically, but most are built from heterogeneous chips. If these different hardware systems remain closed to one another, with non-unified interfaces and incompatible software stacks, it will be difficult to integrate and utilize intelligent computing resources effectively, let alone meet the needs of applications built on large-parameter models.

According to mainstream industry views, domestic AI computing power exhibits diversified and fragmented characteristics while also possessing considerable scale advantages. The immediate task is not to blindly advance a single technology route but to first break through the 'technical walls' and 'ecological walls' as soon as possible, achieve open cross-layer collaboration in the industrial chain, truly unleash the overall computing power ecological potential, and move from single-point breakthroughs to integrated innovation.

Specifically, the open route seeks to promote collaborative innovation across the industrial ecosystem on the basis of an open computing architecture. For example, by formulating unified interface specifications, it can bring upstream and downstream players across chips, computing systems, and large models into joint ecosystem building, reduce duplicated R&D and adaptation investment, and let all parties share the gains of technological breakthroughs and collaborative innovation.
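As an illustration of what a 'unified interface specification' could look like at the software level, the sketch below defines a hypothetical device-agnostic accelerator contract in Python. The names (Accelerator, allocate, matmul, VendorAChip) are invented for this example and do not correspond to any published domestic standard.

```python
# Hypothetical sketch of a unified accelerator interface: each vendor
# implements the same abstract contract, so the frameworks above it need
# only one code path. Names here are invented, not a published standard.
from abc import ABC, abstractmethod
from typing import Any


class Accelerator(ABC):
    """Invented example contract, not an actual interface specification."""

    @abstractmethod
    def allocate(self, shape: tuple, dtype: str) -> Any:
        """Reserve device memory for a tensor."""

    @abstractmethod
    def matmul(self, a: Any, b: Any) -> Any:
        """Run a matrix multiply on the device."""


class VendorAChip(Accelerator):
    # Each vendor maps the shared interface onto its own driver and runtime.
    def allocate(self, shape, dtype):
        return {"shape": shape, "dtype": dtype, "vendor": "A"}

    def matmul(self, a, b):
        return {"op": "matmul", "inputs": (a, b), "vendor": "A"}


def run_layer(device: Accelerator):
    # Framework code is written once against the contract, not per chip.
    x = device.allocate((1024, 1024), "bf16")
    w = device.allocate((1024, 1024), "bf16")
    return device.matmul(x, w)


print(run_layer(VendorAChip())["vendor"])  # "A"
```

The design choice the sketch points at is the one the article describes: adaptation work is done once against the shared contract rather than repeated for every chip-model pairing.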

Meanwhile, as collaboration standards within the open architecture converge, commodity software and hardware can progressively replace customized, proprietary systems, lowering the cost of deploying computing products and making computing power broadly affordable across the entire industrial stack.

Clearly, under China's open system, domestic AI computing power is working past the generalization and adoption dilemmas that constrain Google's TPU, linking the intelligent computing ecosystem broadly with developers and users, building systemic collaborative strength, and supporting the deployment of 'AI+' applications more flexibly and efficiently. At that point, Sino-US AI competition will also move beyond single-card and single-model comparisons and fully enter an ultimate showdown of ecosystem-level capabilities.
