09/02 2025
With the advent of groundbreaking large models like GPT-5 and DeepSeek-V3.1, enterprises that embarked on large-scale AI deployments earlier this year are now grappling with a pressing question: Why aren't the significant investments yielding tangible returns?
According to Gulfstream Economic Review, one critical bottleneck is computing power: between corporate investment and AI implementation sits a gap, namely the lack of an intelligent computing solution. Such a solution must not only make multiple GPUs deliver a synergistic "1+1>2" effect but also enable enterprises to build applications on large models cost-effectively.
01
AI Implementation: An Unprecedented Challenge
Two recent documents warrant special attention:
The first is MIT's report "The GenAI Divide: State of AI in Business 2025," which finds that 95% of surveyed enterprises have failed to realize any substantial ROI from their generative AI investments; only 5% are exceptions.
On one hand, most enterprises profess their commitment to AI adoption; on the other hand, AI applications remain difficult to implement, creating a stark and unusual contrast.
The second document, of even greater significance, is the "Opinions on Further Implementing the 'AI+' Action" issued by the State Council (hereinafter referred to as the "Opinions"). It outlines the acceleration of six key actions: "AI+" in science and technology, industrial development, consumption quality improvement, people's livelihood and well-being, governance capabilities, and global cooperation. It also emphasizes strengthening eight foundational support capabilities, including model foundational capabilities and intelligent computing power coordination, with the goal of fostering new-quality productivity and sharing the benefits of AI development.
The "Opinions" are clear: the six key actions focus on "where to implement," while the eight foundational support capabilities emphasize "how to implement," almost akin to hands-on guidance.
Combining these documents and industry consensus, it becomes evident that to implement AI, two key computing power-related issues must be addressed:
First, "affordability": leveraging new technologies to reduce the unit cost of computing power, making intelligent computing accessible to enterprises.
Second, "effective utilization": decoupling industry applications from different intelligent computing frameworks to achieve "once developed, universally applicable," enabling enterprises to efficiently utilize computing power.
Remarkably, a leading Chinese technology enterprise has not only successfully tackled these two issues but has also tangibly reflected the outcomes in its financial reports:
In the first half of this year, ZTE's operating revenue reached 71.55 billion yuan, a year-on-year increase of 14.5%, with a net profit attributable to shareholders of 5.06 billion yuan. The revenue from the second curve, comprising computing power and terminal products, surged nearly 100% year-on-year, accounting for over 35% of the total. Notably, the company's server and storage revenue increased by over 200% year-on-year, underscoring the efficacy of its recent "connectivity + computing power" strategy. In response, institutions like Kaiyuan Securities have assigned ZTE a "buy" rating.
02
Bridging the AI Gap: Leveraging Ten-Thousand-Card Clusters for Industrial Transformation
So, how exactly has ZTE achieved this?
First, at the hardware level. As demand for computing power surges and large model parameters scale from tens of billions to trillions, traditional computing clusters struggle to meet the training demands of ten-thousand or even hundred-thousand-card deployments, hampered by low interconnect efficiency among GPUs and insufficient scalability. In essence, owning 10,000 NVIDIA cards does not equate to possessing supercomputing power. Each card is akin to a performer in a synchronized show: if the performers are out of step, ignore the choreography (insufficient scalability), or have poor intercom signals (low interconnect efficiency), the desired result cannot be achieved.
Addressing this industry pain point, ZTE emphasizes "computing power networking." With its self-developed "Lingyun" AI high-capacity switching chip at the core, ZTE introduces a "high-speed interconnect open architecture for AI accelerators." Through chip-level optimization, this architecture enables large-scale, efficient GPU interconnect, resolving the high latency and bandwidth bottlenecks of multi-card collaboration and thereby significantly enhancing the computing performance and efficiency of intelligent computing clusters. The "Lingyun" chip reportedly supports data transmission at tens of terabytes per second, providing a robust computing foundation for training sovereign large models with over a trillion parameters.
Comparing computing power networking to home networking might help in understanding. To cover an entire house with gigabit broadband signals (supercomputing power), it's not just about accessing gigabit broadband (purchasing GPU chips) but also having compatible routers ("Lingyun" AI high-capacity switching chips) and Mesh networking solutions.
The significance of this technology is profound. Given the current international context, the "Lingyun" AI high-capacity switching chip precisely addresses the most pressing "interconnect" bottleneck as AI computing scales up. It not only meets the technical performance standards required to support the training of large models with trillions of parameters and improves computing efficiency but also represents a crucial step in China's quest for autonomy and technological breakthroughs amidst fierce AI competition.
The industry has bestowed the highest praise. In July 2025, the "Distributed OCS All-Optical Interconnect Chip and Super Node Application Innovation Solution" jointly developed by ZTE and its partners won the 2025 World Artificial Intelligence Conference SAIL Award (dubbed the "Oscar of AI"). A month later, the "Intelligent Computing Super Node System Based on GPU Card High-Speed Interconnect Open Architecture and Self-Developed 'Lingyun' AI Switching Chip" received the "Annual Major Breakthrough Achievement Award" at the 2025 China Computing Power Conference.
Next, at the application level, ZTE adopts a "1+N+X" strategy. "1" represents the foundational large model, the Nebula Large Model. On this base, "N" domain-specific large models are built through incremental pre-training on domain knowledge, such as R&D, industrial, communication, and government large models. "X" applications are then derived from these domain-specific models, constructing a new engine for industrial digital and intelligent transformation.
For instance, the industry's first large model-based "Intelligent Defense" anti-fraud system, launched last year, is a testament to the "1+N+X" strategy. By incorporating contextual semantic associations, it accurately identifies spam messages despite various mutations and obfuscations, boosting interception accuracy to 99%. As fraudsters' tactics grow more sophisticated, the "Intelligent Defense" system stays one step ahead.
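The intuition behind such contextual screening can be sketched in a few lines. The toy snippet below is purely illustrative and is not ZTE's actual "Intelligent Defense" system; all names and thresholds are hypothetical. It shows how normalizing common character mutations and scoring a message together with its surrounding context can catch obfuscated spam that plain keyword matching would miss.

```python
# Illustrative toy only, NOT ZTE's "Intelligent Defense" system.
# Idea: undo character mutations, then score a message jointly with
# its conversational context instead of matching keywords in isolation.

# Map common character substitutions back to canonical letters.
MUTATIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "€": "e", "$": "s"})

SPAM_TERMS = {"free", "prize", "winner", "transfer", "password"}

def normalize(text: str) -> str:
    """Undo simple character mutations and lowercase the text."""
    return text.translate(MUTATIONS).lower()

def spam_score(message: str, context: list[str]) -> float:
    """Score a message using both its own words and surrounding context."""
    own = sum(term in normalize(message) for term in SPAM_TERMS)
    ctx = sum(term in normalize(m) for m in context for term in SPAM_TERMS)
    # Context hits count for less than hits in the message itself.
    return own + 0.5 * ctx

def is_spam(message: str, context: list[str], threshold: float = 1.0) -> bool:
    return spam_score(message, context) >= threshold

# A mutated message that naive keyword matching would miss:
print(is_spam("Claim your fr€e pr1ze now", context=["You are a winner!"]))  # True
```

A production system would of course rely on a large model's learned semantics rather than hand-written rules; the sketch only conveys why context and mutation-robustness matter.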
In urban construction and other fields, ZTE and its partners have pioneered the use of large models for visual intelligent recognition of various risks like gas leaks, waterlogging, and road hazards, automatically generating emergency response plans, truly "guarding human life and health with technology."
Therefore, it is no exaggeration to say that, from the computing power layer to the application layer, ZTE has genuinely helped enterprises bridge the last mile of AI implementation, scaling from one success to many.
03
Building a Moat: Emerging as a Leader in Intelligent Computing Power
Since 2023, ZTE has shifted its focus from connectivity to "connectivity + computing power," continuously increasing investment in full-stack areas encompassing computing power, networks, platforms, and applications:
At the computing power layer, in addition to the "Lingyun" chip, ZTE has also developed high-performance AI servers, intelligent computing all-in-one machines, and other products;
At the network layer, it has constructed a low-latency, high-bandwidth intelligent computing network to support efficient transmission of computing power;
At the platform layer, it has launched an AI development platform, providing developers with tools for model training, inference deployment, etc.;
At the application layer, it has developed tailored AI solutions for sectors such as finance, healthcare, and manufacturing...
In earlier years, as a provider of comprehensive communication solutions, ZTE's goal was network inclusivity. Nowadays, ZTE has prioritized "computing power inclusivity," aiming to make intelligent computing power as ubiquitous as water and electricity, fostering the widespread application of AI technology across all sectors and benefiting millions of users.
To consolidate its advantages, ZTE has set the goal of becoming a "leader in network connectivity and intelligent computing power," advancing along three dimensions: professionalism, collaboration, and agility:
First, professionalism. Adhering to the philosophy of "leaving complexity to ourselves and simplicity to others," ZTE deepens its traditional core advantages in the ICT field while accelerating innovative breakthroughs in cutting-edge AI technologies, enabling users to deploy solutions out-of-the-box with a single click;
Second, collaboration. ZTE doesn't work in isolation but leverages its resources and capabilities to bring talents from all walks of life together, developing products that swiftly meet niche market needs;
Third, agility. In the era of artificial intelligence, where demands evolve rapidly and information is in constant flux, the ability to respond swiftly and adapt flexibly to tailor products for users is crucial for maintaining a competitive edge.
"All in AI, AI for All" is ZTE's strategic proposition. It's remarkable that this Chinese technology enterprise, which resolutely took up the mantle of breaking through technology blockades a few years ago and reinvigorated its "connectivity" expertise, has now successfully developed its "second curve."
ZTE may already have the answers to how to implement AI and develop "AI+."
Reference Materials:
C114 Communications Network: Behind ZTE's Third-Quarter Report: Multi-Dimensional Layout in Computing Power Business
Shanghai Securities News: Xu Ziyang of ZTE: Taking "Connectivity + Computing Power" as the Long-Term Strategic Main Channel
Lanjing News: ZTE: Laying the Foundation for AI with Computing Power, Opening a New Era of "Connectivity + Computing Power"
Electronic Component Technology: ZTE's "Lingyun" Chip Drives Intelligent Computing Breakthroughs: Ten Thousand Card-Level Super Nodes Support Trillion-Parameter Large Model Training
ZTE: Accelerating the Prosperity of the Intelligent Computing Ecosystem