January 30, 2026
Today (January 29, 2026) is a milestone day for China's AI industry, and arguably for its chip sector as well. Pingtouge (the chip business arm of Alibaba Group) has officially unveiled a high-end AI chip, the "Zhenwu 810E," on its website. Prior to this, rumors about Pingtouge's self-developed PPU (Processor Performance Unit) had been circulating widely in the market, even making an appearance on Xinwen Lianbo (China's flagship evening news program). The few technical details that surfaced sparked intense discussion within China's chip community. Now, Pingtouge's self-developed PPU, the "Zhenwu," has finally stepped into the public spotlight.
Note that although this is the "Zhenwu's" first public reveal, it has already been deployed extensively in real-world applications. To date, it has powered multiple large-scale cluster deployments on Alibaba Cloud, with tens of thousands of cards serving over 400 clients across diverse industries, including State Grid, the Chinese Academy of Sciences, XPeng Motors, Sina Weibo, and more. This is not a chip confined to PowerPoint presentations or stuck in early trials; it has accumulated a substantial base of customer deployments and the beginnings of an ecosystem. Over half a year ago, a friend at Alibaba Cloud remarked, "I think Pingtouge's self-developed chips are on par with mainstream solutions in the market." I was skeptical at the time, but now I understand where his confidence came from.
How advanced is the "Zhenwu"? To answer this, I asked the Gemini large model to summarize the reporting in mainstream English-language media; with that as background, my own read is as follows:
From my perspective, for AI inference tasks the "Zhenwu" is clearly up to the job and offers strong cost-effectiveness within certain workload ranges. For training tasks, the currently available information is limited, but it should only be a matter of time before more details emerge. It would not surprise me if future versions of the Qianwen large model were partially, or even fully, trained on PPUs. Of course, that process will be complex and cannot be rushed.

Amidst the long-term "computing power blockade" imposed by the United States and the domestic shortage of computing resources, Pingtouge's launch of its self-developed PPU is not merely a technical announcement but also carries significant strategic weight. It means that China's AI industry now has more options and greater flexibility in choosing computing power, with a higher proportion of self-developed products. Since its official establishment in 2018, Pingtouge has persevered for eight years, and the "Zhenwu" represents the culmination of its relentless pursuit of independent research and development. This path has not been easy, but once the momentum builds, the future looks promising.
However, today I want to go deeper than the chip itself. AI is a vast industrial chain, and chips are an important but not the only link. From upstream computing hardware (centered on chips) to midstream cloud services and downstream foundational large models, a company must have a presence in at least these three areas to be considered to have a comprehensive layout across the chain. Globally, which technology companies can claim strong technical capabilities in all three? After careful consideration, I find only two: Google and Alibaba.
Let's start with Google: Its self-developed chip, the TPU, is widely recognized for its strength and has already been shipped to third parties on a large scale. Google Cloud is experiencing rapid growth, with most of its new revenue stemming from AI. Its foundational large model, Gemini, has made significant strides in the past year, showing strong momentum to surpass GPT. Google's "full-stack technical strength" is unparalleled in Silicon Valley, and there is a good synergy among the three areas: TPUs reduce the training and inference costs of Gemini, while Gemini and Google Cloud together form an "AI + Cloud" ecosystem. The success of Google Cloud and Gemini, in turn, serves as the best advertisement for TPUs. As a result, Google emerged as the best-performing Silicon Valley tech giant in terms of stock price in 2025, with its market value surpassing $4 trillion.
In contrast, the other two cloud computing giants, Amazon and Microsoft, have incomplete layouts. Amazon's self-developed chip, Trainium, has some technical prowess but lags far behind Google's TPU and has not yet established an external customer base. Its self-developed foundational large models are also of average quality; it relies mainly on investments in external developers such as Anthropic. Microsoft lags even further behind in self-developed chips and depends heavily on its investment in OpenAI for foundational large models. Both giants' AI strategies center on the cloud platform, with limited resources invested in the other two areas and investments and partnerships relied upon to cover the rest. This is a "smart," effort-saving approach, but it forgoes full control over the industrial chain.
Alibaba, however, has chosen a more challenging path, akin to Google's: full-stack self-research, striving for top-tier independent development in AI chips, cloud computing, and foundational large models. Tongyi Lab for large models, Alibaba Cloud for cloud platforms, and Pingtouge for AI chips together form the so-called "Tongyun Ge" AI triangle, or the "Trinity" of AI full-stack technical capabilities. This model demands significant investments, long research and development cycles, and concentrated risks, but once successful, the rewards are substantial and can build formidable competitive barriers. In my view, Google's AI empire transformed from being in turmoil in 2023 to thriving in 2025, with its full-stack self-research capabilities playing a decisive role.

In the cloud computing segment, Alibaba Cloud's technical prowess is already beyond doubt. It ranks among the global top four public cloud platforms, alongside Microsoft Azure, Amazon AWS, and Google Cloud (GCP), with infrastructure spanning 29 regions worldwide and serving over 5 million clients. Alibaba Cloud's "Apsara" is the only self-developed cloud computing operating system in China. At the 2025 Yunqi Conference, Wu Yongming stated that Alibaba aims to become a "super AI cloud" and one of only 5-6 "super cloud computing platforms" globally. Judged by its research capabilities and technological infrastructure, this is not an overstatement, and the goal looks attainable.
In the foundational large model segment, Tongyi Lab's capabilities are also well established. Personally, the two domestic large models I use the most are Qianwen and DeepSeek. Research by Frost & Sullivan indicates that in the first half of 2025, Qianwen ranked first by market share for enterprise-level large model calls in China. Overseas, Qianwen's open-source releases are particularly popular, ranking among the most discussed open-source model families on platforms like Hugging Face (arguably the most discussed). Last year, Singapore's national AI programme (AISG) moved away from Meta's LLaMA series and adopted Qianwen's open-source architecture instead. In this respect, Qianwen's coverage is broader than that of Gemini and GPT, which remain primarily closed-source.
As for the chip segment, before the announcement of the "Zhenwu," the market had limited insight into Alibaba's true capabilities. Many friends in the AI circle told me, "Pingtouge is in the top tier of domestic AI chip makers and may even be one of the most advanced." Until today, however, everyone had only a vague picture of the PPU's technical parameters and application scope. Now that official information is available, I expect more comprehensive disclosures to follow. By any measure, Pingtouge's leading position among domestic chips should now be beyond doubt.
2025 was a breakthrough year for Google's TPU, with Gemini's success prompting more and more external clients to order TPUs. In the coming years, Google may well surpass AMD to become the world's second-largest supplier of AI accelerators, behind only NVIDIA. The same could happen to Pingtouge: the more advanced the Qianwen large model becomes, and the larger the share of its compute that runs on PPUs (its "PPU ratio"), the better the "Zhenwu" and subsequent PPU products will sell. Interested external clients can first rent PPU computing power through Alibaba Cloud before deciding whether to purchase PPUs outright. In the long run, external demand for PPUs will likely walk on "two legs," with significant volumes delivered both as Alibaba Cloud computing power and as direct client purchases, driving down the marginal cost per token and forming a virtuous cycle for the "Tongyun Ge" ecosystem.
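To make the scale argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder (neither Pingtouge nor Alibaba Cloud has disclosed its cost structure); it only illustrates why higher token volumes, whether served as rented Alibaba Cloud capacity or on customer-owned PPUs, pull the blended cost per token down as fixed costs are amortized.

```python
# Illustrative sketch only: all figures are hypothetical placeholders, not
# disclosed Pingtouge or Alibaba Cloud numbers. It shows the amortization
# effect: fixed costs (chip R&D, cluster build-out) spread over more tokens
# served mean a lower blended cost per token.

def blended_cost_per_million_tokens(fixed_cost_usd: float,
                                    opex_per_million_tokens_usd: float,
                                    million_tokens_served: float) -> float:
    """Amortized fixed cost plus operating cost, expressed per 1M tokens."""
    return fixed_cost_usd / million_tokens_served + opex_per_million_tokens_usd

# Hypothetical PPU cluster: $50M fixed cost, $0.20 operating cost per 1M tokens.
for volume in (1e6, 1e7, 1e8):  # total millions of tokens served over the period
    cost = blended_cost_per_million_tokens(50_000_000, 0.20, volume)
    print(f"{volume:>12,.0f}M tokens served -> ${cost:,.2f} per 1M tokens")
```

The same arithmetic also explains the rent-first path: at low volumes the amortized fixed cost dominates, so renting shared cloud capacity is cheaper than owning hardware, and only at sustained high volumes does buying PPUs outright start to pay off.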
The golden triangle of "Tongyun Ge" has taken shape. From now on, when analyzing Alibaba's AI strategy, outside observers should not look at the three pieces in isolation but as an organic whole. Every major company talks about an "AI ecosystem," but to date, Alibaba is the only company in China that has truly built a comprehensive, self-developed one. Humanity has only just entered the AI era, and the long march toward AGI (Artificial General Intelligence) has only just begun. Over the next five, ten, or even more years, what contributions and impacts will Alibaba's AI ecosystem bring to the global AI industry? No one knows the answer, but I am certain they will be immense and far-reaching.