Computing Power Giants Line Up to 'Secure' Anthropic

04/19 2026

Unless one has been closely following the large-model industry chain, Anthropic is easy to overlook. Founded by a group of researchers who left OpenAI, the company has long placed AI safety, enterprise deployment, and reliable delivery at the center of its narrative. While it may not be as prominent as OpenAI in public discourse, among model companies, cloud platforms, and chip supply chains, Anthropic has become an increasingly significant name in recent years.

Precisely because it does not always stand in the spotlight, the news surrounding Anthropic over the past week is all the more noteworthy when viewed collectively: Google and Broadcom announced an expanded partnership to prepare multi-gigawatt next-generation computing capacity for Anthropic; long-term supply arrangements for Google's custom processors came to light at the same time; CoreWeave signed a multi-year cloud capacity agreement; and Anthropic itself is evaluating the development of its own AI chips.

Individually, these announcements may seem like routine partnerships, but together they resemble an emerging competition. Today, the focus around Anthropic is not just a few cloud orders. Google's TPU, AWS's Trainium, CoreWeave's NVIDIA-based cloud, and potential self-developed chips, originally separate supply lines, are now all vying to enter the long-term ecosystem of the same model company. Securing Anthropic is not just about immediate revenue; it is about whether a chip route or a cloud platform's way of organizing capacity can truly be integrated into the production environment of a leading model company in the coming years. Thus, Anthropic is no longer just an upstream customer but is becoming a reference point for how the semiconductor industry rearranges long-term supply.

01

Tech Giants Compete for Anthropic

To understand why Anthropic has suddenly become a coveted target for upstream players, one must first examine its commercial growth. In February 2026, Anthropic announced the completion of a $30 billion Series G funding round, valuing the company at $380 billion post-investment, and disclosed an annualized revenue run rate of $14 billion. By early April, the company stated that its annualized revenue run rate had exceeded $30 billion, with over 1,000 enterprise customers spending more than $1 million annually. This means that Anthropic's demand for computing power is no longer just a one-time peak during the training of a specific model generation but is tied to continuously growing enterprise contracts, paid subscriptions, and long-term product usage.

Therefore, the competition among Google, AWS, and CoreWeave around Anthropic is not just about sales but more akin to a battle over technological routes. On April 6, Google announced that it would work with Broadcom to provide multi-gigawatt next-generation TPU capacity for Anthropic. As early as October 2025, Anthropic planned to scale its use of Google Cloud technology to up to 1 million TPUs, with an expected addition of over 1 gigawatt of new capacity by 2026. Meanwhile, the long-term arrangements between Google and Broadcom for custom AI chips were further disclosed, including approximately 3.5 gigawatts of AI computing power for Anthropic. The industrial significance of this is not just that Google has secured a major client but that TPU architecture, originally designed more for Google's internal systems, is now advancing toward platform-based supply through external leading users like Anthropic.

AWS has similar ambitions. Anthropic has long regarded Amazon as its primary cloud service provider and training partner and continues to collaborate with AWS on Project Rainier. Project Rainier has deployed nearly 500,000 Trainium2 chips, with Claude expected to run on more than 1 million Trainium2 chips by the end of 2025. For AWS, Trainium must move beyond cost narratives and internal validation and enter the primary production environments of cutting-edge model companies. Whether Anthropic truly places critical workloads on Trainium is itself a direct real-world test of this chip route.

CoreWeave represents another external, flexible route more closely aligned with the NVIDIA ecosystem. Its multi-year agreement with Anthropic will prioritize serving Claude series model workloads. Combined with Anthropic's early evaluation of self-developed AI chips and its $50 billion investment with Fluidstack to build U.S. AI infrastructure—planning data centers in Texas and New York for its own workloads—a more complete picture emerges: Anthropic is organizing capacity across multiple existing platforms while also pushing computing power issues deeper into the data center and chip definition layers. For upstream supply chains, such customers are significant because they are not easily locked into a single platform but continually force different architectures, cloud platforms, and data center organization methods to compete simultaneously.

02

From Chip Demand to Infrastructure Demand

If Anthropic were merely a laboratory for training models, its importance to the semiconductor industry would likely remain limited to 'who can provide it with more GPUs.' However, what truly reshapes the issue is its product and customer structure, which is transforming upstream demand from short-term purchases into long-term, stable, and more complex infrastructure needs.

Anthropic explicitly states that Claude does not rely on advertising for monetization; its primary revenue comes from enterprise contracts and paid subscriptions. It aims to have Claude serve work, programming, and complex tasks rather than relying on user engagement for traffic revenue. This business model dictates that Anthropic's computing power consumption will increasingly stem from enterprise workflows rather than occasional consumer hits. By February 2026, Anthropic disclosed that eight of the Fortune 10 companies were already Claude customers, with the number of clients spending over $100,000 annually growing nearly sevenfold year-over-year. For the semiconductor industry, this means facing not a batch of short-cycle trial users but a group of major clients willing to integrate AI into their internal systems and pay sustainably.

The most typical example is Claude Code. In August 2025, Anthropic integrated Claude Code more deeply with Team and Enterprise plans while introducing enterprise capabilities such as Compliance API, organizational-level spending controls, usage analytics, policy settings, and file access restrictions. By February 2026, the company disclosed that Claude Code's annualized revenue run rate had exceeded $2.5 billion, with enterprise usage accounting for more than half of the revenue. This means that Claude Code is no longer just a 'code-writing' model feature but has begun entering the enterprise software layer, where budgeting, permissions, auditing, and organizational management must be taken seriously. For upstream players, the most critical change in this type of business is that demand becomes more sustained and reliant on stable delivery rather than spiking suddenly during model releases.

Anthropic's Managed Agents further advance this shift. This managed service targets long-cycle agent tasks, where the core issues are no longer just model inference itself but session persistence, tool invocation, sandbox isolation, credential security, and failure recovery. Combined with Claude Code's gradual transition from local interactive use to background long-running agent work, task durations are extending from minutes to longer intervals, requiring stronger remote environments, isolated workspaces, and multi-agent parallel support. The industrial implications are straightforward: the demands of leading model companies are pushing chip competition from 'who can provide training and inference cards' to 'who can provide long-running environments, isolated systems, and recoverable infrastructure.'

Enterprise distribution networks further solidify this demand. In March 2026, Anthropic launched the Claude Partner Network, committing $100 million to support consultancies and service providers in driving enterprise adoption, emphasizing that Claude was then the only cutting-edge model simultaneously available on AWS, Google Cloud, and Microsoft's three major cloud platforms. In December 2025, Anthropic and Snowflake announced an expanded partnership, signing a $200 million multi-year agreement under which Claude would cover over 12,600 global customers through multi-cloud platforms; thousands of Snowflake customers were already processing trillions of Claude tokens monthly via Snowflake Cortex AI. In January 2026, Allianz entered a global partnership with Anthropic, integrating Claude into its internal AI platform, with Claude Code covering thousands of developers worldwide and participating in highly compliant automation processes in the insurance industry. Viewed together, Anthropic's value to upstream players is not just 'large volume' but 'clear shape': its models are entering data platforms, development workflows, and highly regulated workstreams, requiring upstream players to prepare for multi-year capacity, stable fulfillment rates, and complex deployment environments in advance.

03

Comparison with OpenAI

Taking a step back, one realizes that Anthropic's importance lies not in whether it resembles another OpenAI but in that it provides an alternative model for organizing upstream supply. OpenAI's current approach amounts to organizing the infrastructure directly. The 'Stargate' initiative, jointly advanced by SoftBank, OpenAI, and Oracle, expects OpenAI to purchase $300 billion in computing power from Oracle over roughly five years. As a new round of expansion progresses, the Stargate capacity under development by OpenAI has exceeded 5 gigawatts, capable of running over 2 million chips, and continues to push the construction of U.S.-based data centers to an even larger scale.

At the chip level, OpenAI is also moving deeper downstream. It plans to launch its first AI chip in collaboration with Broadcom in 2026, prioritizing internal use; meanwhile, it continues to use AMD and NVIDIA chips to diversify supply and reduce costs. Another adjustment shows that OpenAI is not simply expanding capacity indefinitely but is continuously rearranging its capital rhythm, site arrangements, and hardware routes, transferring some originally planned new capacity to other campuses for fulfillment. In other words, OpenAI appears to be actively positioning itself at the center of data centers, chip routes, and capital expenditures.

Anthropic's path differs. Instead of personally orchestrating an entire infrastructure landscape like OpenAI, it leverages enterprise revenue, multi-platform deployment, product integration into workflows, and long-term capacity contracts to make Google's TPU, AWS's Trainium, CoreWeave's GPU cloud, and potential self-developed chips compete around itself. While these two paths appear different on the surface, the signal they send to computing power vendors is consistent: what will determine the next stage of success is no longer just single-chip performance or how many cards a single cloud provider has but who can better organize chips, racks, cloud platforms, data centers, enterprise products, and long-term client contracts into a reliably delivered system.

04

Conclusion

Viewing Anthropic merely as another star model company alongside OpenAI would underestimate the significance of these changes. Its true importance lies in having transformed itself into a scarce type of customer: one capable of continuously consuming the most advanced computing power while embedding models into enterprise software, development workflows, and highly compliant industries, all while retaining the ability to redistribute workloads across multiple infrastructure routes. For Google, AWS, CoreWeave, and even potential self-developed chip projects, Anthropic is not just a revenue source but a critical case for validating technological routes, securing production capacity, and competing for future standards.

Viewed alongside OpenAI, the picture becomes clearer. OpenAI is moving deeper upstream, directly organizing data centers, capital expenditures, and chip routes; Anthropic, on the other hand, is pushing back upstream from downstream enterprise usage, transforming itself into a long-term client that multiple supply routes must compete for.

In the next phase of computing power supply, the most valuable asset may no longer be just more advanced chips or tens of thousands of extra cards in a single cloud provider's hands but who can reliably bundle chips, clouds, data centers, and real customer needs into a long-term supply system. The competition surrounding Anthropic is merely the earliest and clearest preview of this realignment.
