“Compute Power Operators” in the Era of Large Models: How Can WWXQ Craft an Engaging AI Narrative?

May 20, 2025

[Abstract] The advent of groundbreaking applications like ChatGPT and DeepSeek marks the dawn of the AI 2.0 era. Compute power, the backbone of these innovations, has emerged as a pivotal arena for competition.

WWXQ, a startup founded less than two years ago, leverages a team with ties to Tsinghua University and close to RMB 1 billion in funding to establish itself as a "compute power operator in the era of large models."

However, amidst fierce competition from established AI enterprises founded by Tsinghua alumni, the deep expertise of international giants like NVIDIA in software-hardware co-optimization, and the potential rivalry from cloud service providers, WWXQ faces significant challenges.

How to navigate this landscape of giants and build a robust large model ecosystem in the AI era is a pressing question not only for WWXQ but for China's hard tech startup ecosystem as a whole.

Below is the main text:

Tsinghua-Affiliated AI Unicorn

In May 2023, Professor Wang Yu from Tsinghua University's Department of Electronic Engineering and doctoral student Xia Lixue co-founded WWXQ.

Yan Shengen, the company's co-founder and CTO, is also a Tsinghua alumnus and former Executive Research Director of SenseTime's Data and Computing Platform Department, where he led a team to build a cluster of over 10,000 GPUs. Dai Guohao, the Chief Scientist, holds a Ph.D. from Tsinghua's Department of Electronics.

Backed by Tsinghua's NICS-EFC laboratory and a team of Tsinghua graduates, WWXQ has garnered significant recognition since its inception.

Within a year and a half of its establishment, the company secured nearly RMB 1 billion in cumulative funding, placing it among the top domestic players in its sector.

Renowned investors such as Qiming Venture Partners, Legend Capital, Xiaomi, Shanghai AI Industry Investment Fund, Shunwei Capital, Shenwan Hongyuan, and ZhenFund have all invested in WWXQ.

In recent years, AI has undeniably become the hottest area for investment.

Among the "Big Model Six Tigers," Dark Side of the Moon, Baichuan Intelligence, Minimax, and ZeroOne have announced multiple funding rounds, all entering the unicorn club.

In September 2024, Luchen Technology secured hundreds of millions of RMB in Series A++ funding from investors including Beijing AI Industry Investment Fund, Stone Creek Capital, Capstone Capital, and Lingfeng Capital.

Notably, from the Shanghai AI Industry Investment Fund's participation in WWXQ's funding to the Beijing AI Industry Investment Fund's investment in Luchen Technology, state-owned capital has been moving into the AI infrastructure field.

As ChatGPT ushers in the AI 2.0 era, large models open the door to general-purpose AI applications, and DeepSeek's surge has spurred a new wave of growth in China's large model market.

Currently, cities like Beijing, Shanghai, Shenzhen, and Chengdu have issued support policies for AI large models. Against this backdrop, China's AI industry is entering a period of breakthrough.

Integrating Chips and Compute Power

At the methodological level, WWXQ does not rely on software alone to improve chip efficiency; it takes a software-hardware integration approach.

Positioning itself as a "compute power operator in the era of large models," the company addresses the issue of compute fragmentation in AI 2.0 through software-hardware synergy optimization technology.

This strategic choice is deeply rooted in the founder team's insight into industry pain points.

Currently, both the model side and the chip side of the domestic market are thriving, but a persistent ecosystem gap separates heterogeneous chips: many tasks can only run on specific chips, and heterogeneous chips often require customization to meet individual workload needs.

Meanwhile, the costs of large model training and inference have soared, making it difficult for developers to migrate workloads across different kinds of compute. This turns the technical advantages of multi-chip platforms into commercialization pitfalls.

For instance, during DeepSeek's early surge in popularity, its service frequently returned "server busy" errors, exposing a fundamental disconnect between the pace of algorithms and the supply of hardware.

Meanwhile, most domestic models rely on foreign chips such as NVIDIA's for training, making it hard to form an effective closed loop with domestic systems and chips.

Amidst surging application demand, raising compute power for inference and training and connecting the domestic independent software and hardware ecosystems have become primary concerns for mainstream vendors.

To turn chips into effective compute power, a middle layer has to do the heavy lifting, transforming a mix of different chips into a unified pool of high-performance compute.
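To make this "middle layer" idea concrete, below is a minimal, hypothetical Python sketch of a scheduler that hides heterogeneous backends behind one interface and routes a task to whichever chip can host it. The backend names, classes, and methods are illustrative assumptions, not WWXQ's actual software stack.

```python
# Minimal, hypothetical sketch of a "middle layer" that pools heterogeneous
# chips behind one interface. All names here are illustrative; they do not
# reflect WWXQ's actual software stack.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class InferenceTask:
    model: str            # e.g. a DeepSeek-series checkpoint
    prompt: str
    min_memory_gb: float  # rough resource requirement


class Accelerator(Protocol):
    name: str
    free_memory_gb: float

    def run(self, task: InferenceTask) -> str: ...


class ChipBackend:
    """Stand-in for one kind of chip (NVIDIA GPUs, domestic accelerators, etc.)."""

    def __init__(self, name: str, free_memory_gb: float):
        self.name = name
        self.free_memory_gb = free_memory_gb

    def run(self, task: InferenceTask) -> str:
        # A real backend would compile and serve the model with its own toolchain.
        return f"[{self.name}] served {task.model}"


class ComputeScheduler:
    """The 'middle layer': picks a chip that can host the task."""

    def __init__(self, backends: list[Accelerator]):
        self.backends = backends

    def submit(self, task: InferenceTask) -> str:
        # Prefer the backend with the most free memory that still fits the task.
        for backend in sorted(self.backends, key=lambda b: -b.free_memory_gb):
            if backend.free_memory_gb >= task.min_memory_gb:
                return backend.run(task)
        raise RuntimeError("No backend has enough free memory for this task")


if __name__ == "__main__":
    scheduler = ComputeScheduler([
        ChipBackend("vendor-A-gpu", free_memory_gb=24),
        ChipBackend("vendor-B-npu", free_memory_gb=64),
    ])
    print(scheduler.submit(InferenceTask("deepseek-r1-distill", "hello", 32)))
```

In a real platform, the "run" step would involve compiling the model with each vendor's toolchain and managing data movement between devices, which is where most of the software-hardware co-optimization work lives.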

Addressing this pain point, WWXQ built the Infini-AI heterogeneous cloud platform, which lets developers easily access the DeepSeek series of models along with diverse heterogeneous domestic compute services.
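As a rough illustration of what "easily access" can look like for a developer, the sketch below assumes the platform exposes an OpenAI-compatible chat endpoint; the base URL and model identifier are placeholders rather than documented values.

```python
# Hypothetical usage sketch: calling a hosted DeepSeek-series model through
# an OpenAI-compatible endpoint. The base URL and model name are placeholders;
# consult the platform's own documentation for the real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-heterogeneous-cloud/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="deepseek-r1",  # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize what a compute power operator does."}],
)
print(response.choices[0].message.content)
```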

On the Infini-AI platform, heterogeneous chips are used not only for large model inference but also for training, creating a higher value barrier than focusing on chips or large models alone.

This platform aims to break the dilemma of domestic AI having "chips but no ecosystem" and form a closed-loop ecosystem with domestic AI systems and chips.

An Increasingly Competitive Landscape

Recently, MIT Technology Review's report titled "4 Chinese AI Startups to Watch Besides DeepSeek" garnered widespread market attention.

In this article, Jieyue Xingchen, Mianbi Intelligence, Zhipu, and WWXQ drew the attention of practitioners in the United States.

Notably, except for Jieyue Xingchen, founded by former Microsoft Senior Vice President Jiang Daxin, the other three companies all have ties to Tsinghua University.

Mianbi Intelligence has launched the MiniCPM series of models, designed for real-time processing on end devices such as smartphones, PCs, automotive systems, smart home devices, and even robots.

Often nicknamed the "little steel cannon" in China, the latest MiniCPM-o 2.6 model, despite having only 8B parameters, is reported to perform comparably to GPT-4 on various benchmarks.

As one of the earliest domestic enterprises to pre-train AI large models, Zhipu co-developed GLM-130B, a 130-billion-parameter Chinese-English bilingual pre-trained model. On this foundation, it launched the dialogue model ChatGLM and the open-source ChatGLM-6B, which can run on a single GPU.

The team has also built an AIGC model and product matrix, including the AI productivity assistant Zhipu Qingyan, the code generation model CodeGeeX, the multimodal understanding model CogVLM, and the text-to-image model CogView.

The products of these fellow alumni companies all represent new competition for WWXQ.

Furthermore, NVIDIA excels on the software-hardware co-optimization route, leveraging the powerful CUDA ecosystem as a moat on the software side while accelerating mass production on the hardware side.

Currently, NVIDIA's software-hardware collaboration function carries enough internal weight to place demands on both the CUDA team and the chip hardware team.

From a business model perspective, WWXQ's operations share similarities with cloud vendors.

This means that in the long run, there may be business overlap or competition with cloud vendors, and WWXQ may even have to compete with major players like Alibaba, Tencent, ByteDance, and Huawei.

Therefore, amidst layered competition from Tsinghua-affiliated alumni, NVIDIA, and other cloud vendors, WWXQ faces considerable pressure in the second half of its journey.

Epilogue

In an era where compute power has become a national strategic resource, WWXQ's "software-hardware synergy" sets a benchmark for domestic vendors.

How to navigate the landscape of giants and build a robust large model ecosystem in the AI era is a critical question not only for WWXQ but for China's hard tech startup ecosystem as a whole.

- XINLIU -

Disclaimer: The copyright of this article belongs to the original author. It is reprinted solely to share information more widely. If the author's information has been attributed incorrectly, please contact us promptly so we can correct or remove it. Thank you.