Cold Reflection Amid the AI Investment Frenzy: Rumors Around Baidu and the Industry's Monetization Challenge
Should we keep burning money on R&D for general-purpose large models, or accelerate the commercial application of AI? Large model vendors and investors alike are grappling with this dilemma.
In the still-nascent market for large AI models, even the slightest disturbance can set off outsized swings.
Recently, Morgan Stanley released a report stating that China's AI industry is facing greater monetization challenges. The report directly pointed out that the performance of AI pioneers fell short of expectations, with disappointing revenue growth for Kingsoft Office and Wondershare after the launch of their AI products.
Subsequently, several media outlets reported rumors that, given the massive capital outlay and underwhelming commercialization, Baidu might abandon R&D on general-purpose large models. The news triggered significant market volatility, and the head of marketing for Baidu's Wenxin Yiyan quickly denied the rumors, stating, "Wenxin Yiyan has just completed a comprehensive upgrade of its capabilities, and we will continue to increase R&D investment in general-purpose large models."
A simple back-of-the-envelope calculation, however, makes the current predicament of the large-model race plain.
At the end of June, Goldman Sachs published a report titled "Gen AI: Too Much Spend, Too Little Benefit?", pushing the discussion of an AI bubble to the forefront. The report bluntly stated that major companies plan to spend roughly $1 trillion on AI-related buildout over the next few years, including data centers, chips, and power grids, yet so far these investments have mainly improved developer productivity, with little else to show for it.
Sequoia Capital's conclusion is even blunter. In a report, its analyst David Cahn argues that the AI bubble is intensifying: the industry would need to generate more than $600 billion in annual revenue merely to pay back its spending on AI infrastructure such as data centers and GPU accelerators. In an earlier analysis, Cahn assumed that Google, Microsoft, Apple, and Meta could each generate $10 billion a year in new AI revenue, and that Oracle, ByteDance, Alibaba, Tencent, X, and Tesla could each add $5 billion. Even under those assumptions, which sum to only about $70 billion a year, the gap between AI revenue and the payback the investment requires keeps widening.
Turning to China, after the steep price cuts earlier this year, major companies' stance toward large models has grown ambiguous. Many have vowed on earnings calls to increase AI investment, yet in practice their spending has become more cautious. The clearest sign is that executives have begun to downplay iterative updates to foundation models while stressing application deployment: "Without applications, open-source or closed-source models are worthless." Relatively mature applications such as text-to-image and text-to-video generation have emerged as a shared direction.
It is worth recognizing, however, that owing to objective constraints, the simplest monetization route, a subscription like OpenAI's $20-25 monthly ChatGPT plan, is nearly impossible to replicate in China. Business models built on selling API calls have likewise been squeezed to thin margins. Meanwhile, the pace and effectiveness of the major companies' own AI application rollouts have fallen far short of expectations. Facing ever-larger investments and uncertain returns, their anxiety appears to be rising.
On the other hand, as the push toward AGI continues, consensus is starting to fray. OpenAI's new o1 model, which reportedly employs self-play reinforcement learning, marks a departure from training methods built purely on scaling laws. Domestic large models, meanwhile, are still straining to catch up with GPT-4, only to find a new paradigm emerging ahead of them.
Under these dual pressures, the rumors have gained traction, and domestic large models have reached a pivotal moment. The question now is whether to double down on investment or to wait for the technology curve to flatten and bank on late-mover advantage, a decision that could shape the future competitive landscape. Major companies can still use large models to empower their own scenarios and defend their turf, but going further requires clear answers to a series of pressing questions.
Part.1
An Increasingly Expensive 'Game'
From every angle, AI is becoming a game for the wealthy.
According to China Business Network, during recent quarterly earnings calls, Google, Microsoft, and Meta all emphasized significant investments in AI. Meta raised its spending forecast for this year by up to $10 billion, while Google plans to invest approximately $12 billion per quarter in capital expenditures. Microsoft spent $14 billion in its most recent quarter and anticipates a "significant" increase in this expenditure. Taking data centers as an example, according to Synergy Research Group, an American market research firm, 120-130 hyperscale data centers are expected to go online annually in the future, with each costing hundreds of millions of dollars to construct.
Concurrently, Bloomberg reported that OpenAI is in talks to raise $6.5 billion at a valuation of $150 billion and plans to secure $5 billion in debt financing through a revolving credit facility.
This funding round will be led by Thrive Capital, with participation from Microsoft, OpenAI's largest investor, and ongoing negotiations with giants like Apple and NVIDIA.
The exorbitant cost of the AI and large-model race stems not only from an accelerating burn rate but also from the reality that profits remain out of reach in the short term. Reports indicate that OpenAI's annualized revenue had surpassed $3.4 billion in the first half of 2024, yet the capital-intensive nature of AI and intensifying competition keep the company in the red; industry estimates suggest its losses for 2024 could approach $5 billion.
The enormous capital requirements and reliance on outside funding all but dictate the sector's desperate need for commercialization. The harsh reality is that companies unable to raise money risk being absorbed by larger players: in early August, Google announced a deal to bring in Character.AI's founders and license its technology, while the teams behind Adept and Inflection AI had earlier been absorbed by Amazon and Microsoft, respectively.
Domestic AI companies are likewise under pressure. The latest reports say Baichuan and Moonshot AI (Dark Side of the Moon) have each closed funding rounds worth hundreds of millions of dollars, at valuations above RMB 20 billion. Judged by actual profitability, however, these star unicorns have little to show so far.
Unlike their aggressive overseas counterparts, domestic giants' attitudes toward AI appear to be shifting. Having recovered from their early FOMO (fear of missing out), they are gradually pulling back from outsized investment and focusing instead on AI applications and commercialization.
After the price wars, giants with advantages in intelligent cloud services and application scenarios have begun new explorations built on those strengths. Take Alibaba: according to insiders, Alibaba Cloud brought in a large number of NVIDIA A-series and H-series GPUs before chip sanctions tightened, more than 100,000 units in total (including those for its overseas units). Some are used for internal training, but most are rented out to external customers. As one investor put it, "many cloud vendors are giving large-model services away for free," which says a lot about the state of the race.
Meanwhile, Douyin's popularity has underscored ByteDance's product strengths, and the recent launch of Douyin Search has fueled speculation. Insiders also revealed that, in addition to the Hunyuan large model, Tencent has built a separate large model inside WeChat, developed apart from the Hunyuan team. Currently surfaced through WeChat Search, it sometimes summarizes search results (in a limited gray-release test, so not every user sees it). Half of its backend draws on Hunyuan; the other half belongs to WeChat.
In the face of this increasingly expensive game, giants have opted not for massive investments but rather for empowering their businesses. Commercialization aspirations and restrained investments may become the primary strategies for giants in the AI and large model race for some time.
Viewed through this lens, the rumors around Baidu take on a different cast. As a search-centric company deeply affected by AI, Baidu is under greater pressure. Its financial report shows that in the second quarter, online advertising, Baidu's primary revenue source, fell 2% year over year to RMB 19.2 billion, while Baidu AI Cloud generated RMB 5.1 billion in revenue, of which 9% came from external demand for large models and generative AI services. Clearly, AI's boost to Baidu's existing businesses has fallen short of expectations.
According to WeMedia CityBeat, Baidu's internal stance appears consistent with that of mainstream vendors: "Training the next-generation model is not Baidu's top priority." At the same time, "the boss has made it clear that we will not back down."
AI holds promise, but from a pure ROI perspective, an all-in bet warrants caution. Objective constraints aside, the crucial variable for breaking the current stalemate lies in AI applications.
Part.2
The Dilemma and Headwinds of AI Applications
The buzz around AI applications has persisted since the beginning of the year.
Zhu Xiaohu, a prominent investor known for his quotable takes, is one of the flagbearers of this AI-application wave. In a speech early in 2024, he predicted an explosion of AI applications, asserting that by the end of each technology cycle the application layer captures profits roughly ten times those of the earlier stages. As a successful investor behind companies like Didi and Ele.me, Zhu's logic is straightforward: large models make for poor businesses, and future profits hinge on AI applications.
"For each new model generation, you need to reinvest heavily, and your payoff cycle might only be two to three years—even worse than a power plant," Zhu remarked, expressing pessimism towards large models but praising AI applications that directly monetize from users.
Another advocate for AI applications is Baidu's founder, Robin Li. In a July speech, he emphasized the importance of "rolling out applications" for large models: "Without applications, a foundational model, whether open-source or closed-source, is worthless."
While both Zhu and Li support AI applications, their approaches differ. Zhu favors clear product-market fit (PMF) applications with direct B2B monetization potential, such as NearAI (focused on AI interviews) or FancyTech (specializing in vision-based products). In his words, "China's software market grew slowly due to long sales cycles of six to twelve months. But now, if enterprise users experience a 'wow' effect, the monetization cycle can be swift, from initial introductions to signed contracts within a couple of months."
Li, by contrast, favors intelligent agents, envisioning millions of agents tailored to industries such as healthcare, education, finance, manufacturing, transportation, and agriculture, each drawing on its own scenarios, experience, rules, and data to form a vast agent ecosystem.
Zhu, however, remains skeptical of agents. In an interview he conceded that AI agents may never materialize because of the inherent flaws of large models: "Large models inherently hallucinate, with single-step error rates that can reach 10-20%. After five reasoning steps, the cumulative error rate can exceed 50%, which makes them unusable. Even a 20-30% error rate is unacceptable; it does not solve the root problem."
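As a rough illustration of the compounding Zhu describes, here is the arithmetic under the simplifying assumption that errors are independent across steps (the per-step rates and the five-step chain come from his quote):

```latex
% If each reasoning step is correct with probability 1 - p,
% an n-step chain is fully correct with probability (1 - p)^n.
P(\text{chain correct}) = (1 - p)^n
% p = 0.10, n = 5:  0.90^5 \approx 0.59  (about a 41\% chance of at least one error)
% p = 0.20, n = 5:  0.80^5 \approx 0.33  (about a 67\% chance of at least one error)
```

Under the 20% per-step assumption, the compounded error rate does indeed pass 50%, which is the arithmetic behind Zhu's pessimism.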
The debate over AI application routes continues. On a broader scale, AI applications seem to be encountering headwinds. Morgan Stanley's report titled "China's AI Faces Greater Monetization Challenges" stated that AI application development has lagged expectations, with monetization proving even more challenging.
The report found that, against the current macroeconomic backdrop, businesses and consumers are reluctant to accept price increases tied to AI features, while competition from free AI services adds further pressure on margins. AI products also often fall short of customer expectations, owing to factors such as a shortage of high-quality domain data, weak performance in specific scenarios, and general immaturity. This difficulty in turning AI into realized value persists both in China and abroad: industry leaders are underperforming, AI monetization remains elusive, U.S. software companies have posted disappointing year-to-date results with limited AI-related revenue, and AlphaWise surveys show CIOs repeatedly pushing back their AI rollout timelines.
Domestically, the disappointing revenue growth of Kingsoft Office and Wondershare after launching AI products underscores intense competition in basic AI functionalities and the immaturity of advanced applications. Both companies vow to increase R&D investments, signaling uncertain profitability prospects.
In short, AI applications are still in an exploratory phase. Giants and investors alike are probing their potential, but no consensus has formed. From a monetization standpoint, investors may find safer business models, yet technological change keeps introducing new variables. For now, an explosion of AI applications looks distant, and business plans premised on one will have to wait.
Part.3
The Increasingly Complex Route Controversy
On September 13, OpenAI released a preview of its new-generation large model, internally codenamed "Strawberry," which caused an instant sensation across the industry.
Judging from feedback on its performance, the product, named OpenAI o1, can fairly be called groundbreaking. Billed as the first large model with "reasoning" capability, it works through a problem step by step, much as a person would, until it arrives at a conclusion.
According to evaluations on the OpenAI official website, this model excels at handling mathematical and coding problems, and even surpasses human Ph.D. level accuracy in benchmarks for physics, biology, and chemistry problems.
Feedback from industry insiders suggests that o1 may mark a shift in Silicon Valley's approach to AGI. Having run into diminishing returns from simply scaling compute and parameters, many of the Valley's star companies have redirected resources toward a new path: self-play RL (self-play reinforcement learning). OpenAI o1 appears to be a product of that shift.
Specifically, according to insiders, the o1 model repeatedly "samples" different possibilities and keeps refining toward a better result. Ask it a hard math question and it will not answer in a second; like a careful person, it breaks the problem into steps and reasons through them one by one. The payoff is that its answers are usually more accurate and more logical, especially for scientific reasoning, programming, and math. In a qualifying exam for the International Mathematical Olympiad, for instance, o1 solved 83% of the problems, versus just 13% for GPT-4o, a qualitative leap in handling complex problems.
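For readers who want a concrete picture, here is a minimal sketch of the general pattern the insiders describe: sample several reasoning chains and keep the best-scoring one (best-of-N selection). The `generate_reasoning` and `score` functions below are hypothetical stand-ins for model calls; this is an illustration of the idea, not OpenAI's actual training or inference pipeline.

```python
import random  # stand-in randomness; a real system would call a model API


def generate_reasoning(question: str) -> list[str]:
    """Sample one multi-step reasoning chain for the question (placeholder)."""
    n_steps = random.randint(3, 6)
    return [f"step {i + 1} toward solving: {question}" for i in range(n_steps)]


def score(chain: list[str]) -> float:
    """Placeholder verifier/reward model: rate how promising a chain looks."""
    return random.random()


def best_of_n(question: str, n: int = 8) -> list[str]:
    """Sample n candidate reasoning chains and keep the highest-scoring one."""
    candidates = [generate_reasoning(question) for _ in range(n)]
    return max(candidates, key=score)


if __name__ == "__main__":
    best = best_of_n("a complex competition math problem", n=8)
    for step in best:
        print(step)
```

A real system would replace the random stand-ins with a language model that proposes reasoning steps and a verifier or reward model that scores them, which is where the reinforcement learning comes in.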
Since the beginning of this year, Silicon Valley's AI community has been in flux: multimodal models, models trained on 100,000-GPU clusters, and self-play reinforcement learning have all emerged. The old consensus has broken, and no standard answer has yet taken its place.
Based on what is known so far, the original consensus on how to reach AGI is coming apart. Many questions about o1 remain open, but from the standpoint of AGI's evolution, multiple possible paths now seem to be in play. That is good news for Silicon Valley, with its clear advantages in funding and talent; for the followers, it raises the bar.
Looking at the endgame, domestic AI will inevitably face another choice of technical direction, and larger-scale investment seems unavoidable. The money-burning fight may not start immediately, but it will be decisive for who ultimately succeeds.
Seen this way, companies that cannot generate revenue on their own are the most likely to be eliminated. Domestic giants can keep funding a fast-follower strategy with their cash-cow businesses, but steadily rising costs and unclear monetization paths may yet become stumbling blocks. The rumors swirling around Baidu today point to a question the whole industry will eventually have to answer, and for now the answer remains unclear.