Understanding Google Last Year, Meta This Year

03/02 2026

Over the past year, the market has directed significant attention toward Google, widely regarded as one of the great success stories of the AI era thanks to its vertically integrated ecosystem built around self-developed TPU chips and the Gemini model family. As we enter 2026, however, a more noteworthy case is emerging: Meta.

If Google's success represents a decade of accumulated strength coming to fruition, then the comprehensive, aggressive, and highly pragmatic strategy Meta has unveiled in just one year more directly foreshadows the industrial landscape of the next phase of AI competition.

In early 2026, Meta announced a series of large-scale investment plans that clearly showcase its strategic positioning in artificial intelligence. By signing major chip procurement agreements with NVIDIA, AMD, and Google, constructing gigawatt-scale data centers, staying the open-source course with its Llama large models, and achieving initial success in the AI smart glasses market, Meta is building a complex AI business landscape.

This time, we will delve into Meta's AI "empire," attempting to interpret a playbook that differs from Google's, while also exploring the opportunities it offers Chinese suppliers.

01

Gigawatt-Scale Data Centers as the Foundation

Entering 2026, Meta's investment in AI infrastructure can be described as "frenetic." The company expects its annual capital expenditures to soar to between $115 billion and $135 billion, representing a nearly 73% increase from 2025. Behind this enormous capital outlay lies a clear and aggressive strategy: to establish powerful, even redundant, moats at every critical level of AI to support its grand vision of ultimately achieving "personal superintelligence."
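As a rough sanity check on these figures, the guided range and growth rate can be reconciled with simple arithmetic. The sketch below assumes the "nearly 73%" growth figure refers to the midpoint of the guided range; the implied 2025 baseline it back-calculates is an inference, not a number stated in the article.

```python
# Back-of-envelope check on Meta's guided 2026 capital expenditure.
# Assumption (not from the article): the ~73% growth rate is measured
# against the midpoint of the $115B-$135B range.
low, high = 115e9, 135e9            # guided 2026 capex range, USD
midpoint = (low + high) / 2         # midpoint of the range
implied_2025 = midpoint / 1.73      # 2025 baseline implied by ~73% growth

print(f"2026 midpoint: ${midpoint / 1e9:.0f}B")
print(f"implied 2025 baseline: ~${implied_2025 / 1e9:.0f}B")
```

Under that assumption, the midpoint of $125 billion implies a 2025 base of roughly $72 billion, which is consistent with the order of magnitude of Meta's prior-year spending.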

Meta's AI strategy is built upon a vast physical infrastructure. It is constructing ultra-large-scale data centers designed specifically for AI on a global scale, with their planned power no longer measured in megawatts but entering the gigawatt era. Currently, Meta has at least three gigawatt-scale AI data center campuses under construction or in planning, including a 1-gigawatt project in New Albany, Ohio, a 5-gigawatt project in Louisiana, and a $10 billion-plus investment in a 1-gigawatt campus in Indiana.

A gigawatt-scale data center entails millions of GPUs, millions of optical modules, tens of thousands of server racks, and a vast array of servers, switches, cables, and power equipment, comprehensively driving demand across the entire infrastructure supply chain. According to McKinsey's analysis, 60% of AI data center spending goes toward chips and computing hardware. The total expenditure on AI data center infrastructure by the four largest U.S. hyperscale cloud providers (Amazon, Meta, Google, and Microsoft) in 2026 is expected to reach $700 billion, creating unprecedented demand across the supply chain.
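The claim that gigawatt campuses entail "millions of GPUs" can be checked with a back-of-envelope conversion. The per-accelerator power figure below is an illustrative assumption (roughly 1.4 kW all-in per GPU, covering host, networking, and cooling overhead), not a number from the article or from any vendor specification.

```python
# Rough conversion from campus power budget to accelerator count.
# Assumption (illustrative only): ~1.4 kW all-in per GPU, including
# host servers, networking, and cooling overhead.
def gpus_per_gigawatt(campus_gw: float, kw_per_gpu: float = 1.4) -> int:
    """Estimate how many GPUs a campus power budget can support."""
    watts = campus_gw * 1e9
    return int(watts / (kw_per_gpu * 1e3))

for gw in (1, 5):
    print(f"{gw} GW campus ~ {gpus_per_gigawatt(gw):,} GPUs")
```

On these assumptions, a 1-gigawatt campus supports on the order of 700,000 accelerators, and the 5-gigawatt Louisiana project over 3.5 million, which is consistent with the "millions of GPUs" scale described above.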

Meta's enormous capital expenditures have first ignited the optical communications market. To connect millions of GPUs, data centers require a massive volume of 800G or even higher-speed optical modules for data transmission. As global leaders in the optical module market, China's Zhongji Innolight and YOFC are core suppliers to Meta. Additionally, Meta's fiber optic cable supply agreement with Corning, worth up to $6 billion, sends a clear signal: the bottleneck in AI infrastructure is shifting from computing power itself to more fundamental connectivity links.

At the data center infrastructure level, Chinese suppliers have already secured a global leading position in areas such as optical modules, server OEM, and PCB boards, becoming indispensable choices for overseas giants like Meta when expanding their data centers.

An expert in the domestic data center power supply and distribution field told reporters that customer demand has shifted from single-point deployments in traditional small-scale data rooms to intensive construction of gigawatt-scale campuses in recent years, accompanied by key iterations in the selection of core power supply and distribution equipment. The expert pointed out that market requirements for delivery timelines have become extremely stringent, with some projects even demanding overall delivery times compressed to within three months and equipment supply cycles controlled to within 45 days.

02

Diversified Chip Supply

At the chip level, Meta has constructed a diversified computing power portfolio centered on external procurement supplemented by self-developed chips. Behind this strategy lies an extreme pursuit of cost, efficiency, and supply security. Meta's current computing power sources consist of four parts: NVIDIA GPUs, AMD custom GPUs, Google TPUs, and self-developed MTIA chips, each corresponding to different supply chain logics.

Its multi-year, multi-generation strategic cooperation with NVIDIA ensures access to top-tier GPUs for training cutting-edge large-scale models, but the high costs and risks of concentrated supply persist. Therefore, Meta's shift toward AMD and Google essentially involves supporting NVIDIA's competitors to enhance its bargaining power and reduce supply chain risks through diversified suppliers. The multi-year, up-to-6-gigawatt custom GPU procurement agreement with AMD provides a cost-effective solution for Meta's massive AI inference demands. Meanwhile, a newly reached multibillion-dollar TPU leasing agreement with Google further strengthens its diversified supply strategy.

The latest setback for its self-developed MTIA chip underscores the importance of external supply chains even more. According to reports, Meta has abandoned its most advanced self-developed AI training chip due to design challenges. This setback precisely explains why Meta signed three large orders with NVIDIA, AMD, and Google in a short period—blocked on the self-development route, it must ensure stable computing power supply through strengthened external procurement.

This also solidifies the positions of Broadcom, which collaborates with Meta on MTIA development, and TSMC, which provides manufacturing services for all these chips, within the supply chain. Notably, AI chip performance depends not only on design but also heavily on advanced packaging technology. TSMC's CoWoS packaging technology represents a key bottleneck in current high-performance GPU manufacturing, with its capacity directly determining shipment volumes for vendors like NVIDIA and becoming a supply chain reality that clients like Meta must confront.

03

Open-Source Models and AI Glasses

At the model level, Meta has chosen an open-source path distinct from OpenAI and Google. Since its release in 2023, Meta's Llama series has consistently ranked among the most popular model families in the open-source community. By opening model weights, Meta has attracted global developers and researchers to participate in model improvement and application development, accelerating technological iteration and forming a vast "Llama ecosystem." Its business model is to give the models away free of charge, using them to drive growth in its core social and advertising businesses and to support its future AI hardware, ultimately closing a commercial loop across the entire ecosystem.

Meta's AI strategy is not limited to the cloud; it is extending AI capabilities to consumer endpoints. Among its most successful products are the AI smart glasses developed in collaboration with EssilorLuxottica, the parent company of Ray-Ban. With over 7 million units sold in 2025, far exceeding market expectations, the product marks the first preliminary mass-market recognition of AI wearables. According to EssilorLuxottica's financial report released in February 2026, sales of this product line tripled year-over-year.

The success of AI glasses has added a new dimension to Meta's semiconductor strategy. It is no longer merely a procurer of data center chips but also an important client for consumer-grade AI chips. According to a report by Bank of America Securities, over 80% of the global AI glasses supply chain is located in China. Chips and optical modules represent its two core components, accounting for over 70% of total costs.

On the assembly side, Goertek serves as Meta's core OEM partner; at the core chip level, Bestechnic exclusively supplies its audio chips, while Pibi Memory provides some storage solutions; in the upstream optical field, companies such as Crystal-Optech and Sunny Optical supply key optical modules. Without the deep involvement of the Chinese supply chain, Meta's AI glasses could not have been brought to market as efficiently or at such controlled costs. Foreign media have commented that Meta has little choice but to work with Chinese factories, as they are the most stable and reliable suppliers of critical components. Behind this dependence lies the vast manufacturing ecosystem and rapid-response supply chain management capabilities China accumulated over the past two decades of consumer electronics waves.

An expert in the domestic storage chip field told reporters that the system complexity of AI wearables far exceeds that of traditional consumer electronics, imposing stricter requirements on storage chips in terms of capacity, read/write speed, and power consumption control, which is driving storage chips toward higher-performance iterations. While overseas major clients still rely primarily on Taiwanese vendors for high-end storage products, mainland vendors have, through years of sustained technological investment and market expansion, achieved multiple successful supply-chain entries with major overseas clients, and have substantial opportunities to penetrate AI wearables such as Meta's glasses.

04

Meta vs. Google: Two Approaches

In contrast to Meta's diversified procurement model, Google's strategy in the AI chip field leans more toward "vertical integration." As early as 2015, Google deployed its self-developed TPU. The core of this approach is to achieve deep co-optimization from underlying chips to upper-layer applications through self-developed hardware, keeping key supply chain links in its own hands. This model has an extremely high barrier to entry, requiring long-term technological accumulation and sustained investment, but once successful, its advantages in cost, efficiency, and supply chain security are pronounced.

Meta and Google represent two distinct approaches among tech giants in constructing computing power infrastructure. Google's "vertical integration" pursues extreme internal efficiency and cost control, building a closed but efficient system through hardware-software co-design, with the ultimate goal of strengthening the competitiveness of its cloud service (GCP). In contrast, Meta's "diversified procurement" resembles a pragmatic, externally oriented resource integration strategy. It does not seek complete control over the supply chain but ensures its bargaining power and supply security by supporting multiple suppliers, ultimately serving its social, advertising, and metaverse businesses rather than providing cloud services externally.

Recent developments have made this competitive-cooperative relationship even more nuanced: Meta's agreement to lease Google TPUs shows that, in order to break a single supplier's monopoly, former competitors can choose to collaborate at the level of AI computing power. Whether for Meta or Google, the ultimate goal is to build a powerful AI ecosystem. Meta's strategy objectively opens significant market space for NVIDIA's competitors (such as AMD and Google), accelerating diversified competition in the AI chip market. Meanwhile, its success in AI glasses has driven growth in the new market for consumer-grade AI hardware, creating fresh opportunities for companies across the relevant supply chain.

05

Conclusion

Through its massive capital expenditures and a comprehensive strategy covering "cloud-network-endpoint," Meta demonstrates its determination to establish leadership in the AI field. From gigawatt-scale data centers to a diversified computing power portfolio encompassing NVIDIA, AMD, Google, and even self-developed chips, and from the open-source Llama model ecosystem to AI smart glasses, every step Meta takes aims to build a complete closed loop from infrastructure to user experience.

This strategy not only provides market space for NVIDIA's competitors but also drives growth in the new market for consumer-grade AI hardware, influencing the global semiconductor competitive landscape. For Chinese enterprises deeply involved, whether as suppliers of data center infrastructure or core component suppliers for AI consumer electronics endpoints, this represents an opportunity to enhance their technological capabilities and expand globally. However, the long-term challenge they face is how to move up the value chain while enjoying short-term dividends and establish autonomous core technological barriers.

Disclaimer: the copyright of this article belongs to the original author. It is reprinted solely to share information more widely. If the author's information is marked incorrectly, please contact us immediately to correct or delete it. Thank you.