The 210 Billion Token “Ranking Manipulation”: When AI Consumption Replaces Output, Global Tech Giants Get Stuck in a Consumption Quagmire

03/26 2026

As Silicon Valley’s tech industry shifts its focus from “valuing output” to “competing on consumption,” a bizarre “money-burning” race is sweeping through the global tech sector. In a single week, OpenAI employees consumed 210 billion tokens, equivalent to the total text volume of over 30 Wikipedias. Meta and Shopify evaluate employees based on computing power consumption, and Chinese tech giants like Alibaba, Tencent, and ByteDance are following suit by incorporating token consumption into their core performance metrics.

This cross-border consumption race has transformed AI computing power from a “cost tool” into a “symbol of value,” driven by a complex interplay of capital games, technological change, and workplace dynamics. As “feeding AI” becomes a workplace norm and investors’ money pours into “ineffective consumption,” the AI industry stands at a crossroads between bubble and breakthrough.

Introduction: The Absurd “Computing Power Struggle” is Reshaping the Workplace

In Silicon Valley’s tech giants, a radical redefinition of “diligence” is taking hold. An employee who fails to consume enough AI computing power by the end of the day risks being marginalized in performance reviews. On Meta’s internal leaderboard, token consumption has replaced code quality as the core metric for measuring an employee’s “future-readiness.” In China, tech teams at Alibaba, Tencent, and ByteDance have been given clear token consumption KPIs, with “under-consumption” even interpreted as “failure to fully utilize AI tools.”

This is not merely a technical application but an out-of-control “computing power arms race.” New York Times columnist Kevin Roose captured the absurdity with a sharp metaphor: “It’s like evaluating NBA mascots based on how many Hermès T-shirts they launch.” When expensive AI computing power becomes a “status symbol” and consumption itself matters more than output, the fundamental logic of the AI industry is being rewritten.

Global Tech Giants in Sync: From Silicon Valley Leaderboards to Chinese KPIs, Consumption-Based Evaluation Spreads

Silicon Valley’s consumption race has evolved from isolated corporate “trial and error” into an industry consensus. Meta, OpenAI, Shopify, and other leading firms have launched internal token consumption leaderboards, with managers publicly rewarding “heavy computing power users” while pressuring and reprimanding those with low consumption. Under this incentive system, employees no longer seek to solve problems at the lowest computing cost but instead fall into a cycle of “meaningless consumption,” generating redundant code and repetitive analyses just to meet consumption targets.

This trend quickly spread to China, where consumption-based evaluations at domestic tech giants have intensified. Alibaba’s ATH business group led the shift by fully transitioning AI business core metrics from DAU (Daily Active Users) to token consumption, establishing a dedicated team to oversee computing power allocation and drive “proactive consumption.” Some Tencent R&D teams explicitly require engineers to consume a specified daily token quota, with performance ratings restricted for non-compliance. ByteDance has mandated AI tool usage across content creation and product development teams, making token consumption a core performance dimension.

The evaluation logic of tech giants in China and abroad is strikingly consistent: higher consumption signifies greater AI commitment and potential. Yet few ask: How much of this consumption translates into actual output? What business value does it deliver?

The Truth Behind 210 Billion Tokens: Staggering Consumption and Cost Disconnects

The 210 billion token weekly consumption figure is the most vivid symbol of this race. What does this number mean? It equals the combined text volume of over 30 Wikipedias, enough printed text to fill a sizeable library. Ironically, in one reported case overseas, a company's token bill for running AI coding tools on behalf of a single software engineer exceeded that engineer's salary.
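The "30 Wikipedias" comparison can be sanity-checked with back-of-the-envelope arithmetic. The Wikipedia word count and the tokens-per-word ratio below are assumptions for illustration, not figures from the article:

```python
# Back-of-the-envelope check of the "30 Wikipedias" comparison.
# Assumptions (not from the article): English Wikipedia holds roughly
# 4.5 billion words, and one English word averages about 1.3 tokens.
WEEKLY_TOKENS = 210e9      # 210 billion tokens consumed in one week
WIKIPEDIA_WORDS = 4.5e9    # assumed total word count of English Wikipedia
TOKENS_PER_WORD = 1.3      # assumed average tokenization ratio

wikipedia_tokens = WIKIPEDIA_WORDS * TOKENS_PER_WORD
wikipedia_equivalents = WEEKLY_TOKENS / wikipedia_tokens
print(f"~{wikipedia_equivalents:.0f} Wikipedias of text")
```

Under these assumptions the figure comes out in the mid-thirties, consistent with the article's "over 30" framing; different tokenizers or corpus estimates would shift it somewhat.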

This “tools more expensive than people” phenomenon is not unique. Token consumption in China has also exploded, soaring from roughly 100 billion tokens per day in early 2024 to over 30 trillion by June 2025, a more than 300-fold increase in 18 months. Leading firms boast astonishing processing capacity: Baidu’s ERNIE model handled approximately 1 trillion tokens daily in 2024. For some internet firms, monthly token expenses now approach or even exceed team labor costs, upending the traditional tech-industry cost logic that treated human capital as the core asset.

This frenzied token consumption is, at bottom, a digital byproduct of electricity and computing power. On the supply side, OpenAI relies on Microsoft Azure’s supercomputing clusters, while domestic firms leverage China’s “East Data, West Computing” infrastructure, building computing bases in Gansu, Ningxia, and other regions with low-cost green energy. Key upstream suppliers include hardware firms such as NVIDIA and Inspur Information, along with third-party computing providers such as CoreWeave and Capital Online.

Despite Chinese models’ significant price advantage (mainstream U.S. AI models charged about 72 RMB per million tokens in early 2026, while Chinese models clustered around 10–20 RMB, a gap of up to roughly sevenfold), tech giants’ AI computing expenditures remain astronomical. Over 70% of R&D investment at domestic firms like Zhipu AI goes to computing power, while OpenAI’s annual inference costs surpassed $10 billion, with the vast majority funded by capital markets.
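The scale of the gap is easy to see with the per-million-token prices cited above. The 1-trillion-tokens-per-day workload below mirrors the Baidu ERNIE figure mentioned earlier and is used purely as a scale reference, not as any firm's actual bill:

```python
# Illustrative daily inference cost at the per-million-token prices
# cited in the article. The workload size is a scale reference only.
DAILY_TOKENS = 1e12      # 1 trillion tokens per day (illustrative workload)
US_PRICE_RMB = 72        # RMB per million tokens (mainstream US models, early 2026)
CN_PRICE_RMB = 15        # RMB per million tokens (midpoint of the 10-20 range)

def daily_cost_rmb(tokens: float, price_per_million: float) -> float:
    """Inference cost in RMB for a given token volume and unit price."""
    return tokens / 1e6 * price_per_million

us_cost = daily_cost_rmb(DAILY_TOKENS, US_PRICE_RMB)  # 72 million RMB per day
cn_cost = daily_cost_rmb(DAILY_TOKENS, CN_PRICE_RMB)  # 15 million RMB per day
print(f"US-priced: {us_cost/1e6:.0f}M RMB/day, CN-priced: {cn_cost/1e6:.0f}M RMB/day")
```

Even at the cheaper Chinese unit price, a trillion-token daily workload still runs to tens of millions of RMB per day, which is why computing dominates R&D budgets despite the discount.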

Capital’s Secret: Using Investors’ Money to Fuel Industry Valuation “Vanity”

Why do tech giants tolerate or even encourage meaningless token consumption? The answer lies in capital markets’ ruthless logic.

For firms like OpenAI and Meta, daily processing volumes of 5 trillion tokens are not technical burdens but “computing power moats” to showcase to investors. In capital’s eyes, massive token numbers serve as hard currency, signaling the most powerful models, broadest application scenarios, and power to define industry rules. This “vanity display” aims to inflate valuations and attract more financing.

Chinese firms follow the same logic. Alibaba, Tencent, and others frequently release trillion-level token consumption data to prove their AI leadership and secure market confidence. Some startups use “high consumption” to establish industry presence and unlock financing opportunities.

Critically, most of this consumption is funded by investors’ money. OpenAI has raised over $10 billion in financing, yet still reported a $4.3 billion loss in H1 2025, with all computing costs for free users covered by funding. Domestic startups like Zhipu AI posted a 621 million RMB adjusted net loss in H1 2025, dragged down primarily by computing costs. Even for Alibaba and Tencent, AI computing investments exceed 15% of annual revenue, essentially using corporate funds (including financing) to drive consumption and inflate AI business valuations.

This “burn money for scale” model is pushing the industry into a vicious cycle of “bigger scale, deeper losses.” Employees become “AI feeders,” firms become “capital vanity tools,” and AI’s core mission of boosting productivity is replaced by consumption competition.

The Path Forward: From “Consumption-First” to “Value-Oriented,” the Industry Must Return to Rationality

This escalating token consumption race is ultimately a phase of capital-driven chaos. As cost pressures mount, the industry is gradually emerging from the fog and returning to rationality.

Some Chinese firms have already led the correction. Alibaba is shifting evaluation criteria from pure token consumption to business value and ROI. Some tech giants are strictly controlling computing costs through caching technologies and architectural optimizations to reduce per-token consumption. More teams are establishing “consumption-output” assessment mechanisms to reject meaningless computing waste.
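The "consumption-output" idea above can be sketched as a simple metric: rank teams by business value generated per token rather than by raw tokens burned. All team names and numbers below are hypothetical illustrations, not data from any company:

```python
# Minimal sketch of a "consumption-output" assessment: value per token
# instead of raw consumption. All names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class TeamUsage:
    name: str
    tokens_consumed: float   # tokens burned over the review period
    value_generated: float   # business value attributed, in RMB

    def value_per_million_tokens(self) -> float:
        return self.value_generated / (self.tokens_consumed / 1e6)

teams = [
    TeamUsage("heavy-burner", tokens_consumed=5e9, value_generated=100_000),
    TeamUsage("lean-user", tokens_consumed=2e8, value_generated=80_000),
]

# Ranking by raw consumption vs. by efficiency tells opposite stories.
by_consumption = max(teams, key=lambda t: t.tokens_consumed)
by_efficiency = max(teams, key=lambda t: t.value_per_million_tokens())
print(by_consumption.name, by_efficiency.name)  # heavy-burner lean-user
```

In this toy example the leaderboard winner under a consumption KPI is the least efficient team, which is exactly the distortion the article describes.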

Globally, price wars are forcing the industry to cool down. OpenAI slashed o3 model prices by 80% in 2025, while domestic firms cut prices to capture market share, driving token unit prices downward and significantly reducing enterprise consumption costs. Meanwhile, technical advances such as agent-based AI and “vibe coding” workflows are improving computing efficiency and reducing waste.
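One of the cost levers mentioned above, caching, lowers the effective per-token price by serving repeated prompt prefixes at a discount. The discount rate and cache hit rate below are assumptions for illustration; actual pricing varies by provider:

```python
# Sketch of how prompt caching lowers effective per-token cost.
# The 10% cached-token price and the 60% hit rate are assumptions.
FULL_PRICE = 20.0       # RMB per million input tokens (illustrative)
CACHE_DISCOUNT = 0.10   # cached tokens billed at 10% of full price (assumed)
CACHE_HIT_RATE = 0.6    # 60% of input tokens served from cache (assumed)

# Blend the full price for cache misses with the discounted price for hits.
effective_price = (FULL_PRICE * (1 - CACHE_HIT_RATE)
                   + FULL_PRICE * CACHE_DISCOUNT * CACHE_HIT_RATE)
savings = 1 - effective_price / FULL_PRICE
print(f"effective price: {effective_price:.1f} RMB/M tokens ({savings:.0%} cheaper)")
```

Under these assumed numbers the effective price falls by roughly half, which is the kind of architectural saving the article credits for cooling the race.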

AI’s core value has never lain in how much computing power it consumes but in how much real value it creates. Tokens are cost tools for technological implementation, not badges to measure employee worth or corporate strength.

The consumption quagmire that has trapped tech giants in China and abroad is a warning to the entire industry: profligacy detached from actual output is built on sand, and prosperity sustained by capital infusions cannot last. Only by abandoning distorted “consumption-first” evaluations, restoring tokens to their essence as a cost, returning AI technology to its productivity roots, and tightly linking computing investments to business results can artificial intelligence truly empower industrial development, emerge from the bubble, and move toward a healthy, sustainable future.

Source: Investor Network

Disclaimer: The copyright of this article belongs to the original author. It is reprinted solely to share information more widely. If any author information is marked incorrectly, please contact us promptly so we can correct or delete it. Thank you.