AI unicorns are exiting the stage: the model+application path is no longer viable

08/08/2024

In recent days, Google's acquisition of Character.AI (hereinafter referred to as C.AI) has sparked much discussion.

Yesterday, I talked about how Google's acquisition was mainly about acquiring talent, with each person valued at $100 million. (One person worth $100 million! To acquire these 31 AI talents, Google spent $500 million)

Acquisitions like this, where the main asset is talent, have occurred twice more in the first half of this year: Microsoft's acquisition of Inflection and Amazon's acquisition of Adept.

In addition to their valuable talent, these three companies, which share a similar outcome, also have three common characteristics:

They were established early enough, raised a lot of money, and adopted a dual-drive strategy of both models and applications.

This is closely related to the development stage of the industry at the time. In the early days of the AI industry, training costs were not yet high, and there were no readily available open-source models on the market. Pre-training from scratch was the only viable path. Coupled with the explosive popularity of ChatGPT, everyone saw the enormous potential of models.

More and more people firmly believe that only companies that occupy both the model and application layers, known as "full-stack" companies, have the potential to capture the greatest value and are more likely to secure funding from investors.

However, after a year, the situation has undergone tremendous changes.

Training costs in the billions of dollars are more than startups can bear, while falling model prices and fast-iterating open-source models have rapidly devalued large models, leaving most startups that bet on models as pioneers who fell by the wayside.

As the saying goes, success and failure often stem from the same cause. The complete reshuffling of the model layer was an inevitable part of this history, and the exit of these entrepreneurial stars leaves the industry with a lesson:

For startups, the strategy of pursuing both models and applications is being proven unworkable.

/ 01 / Pioneers of the Model+Application Approach

Looking at the development of C.AI, Inflection, and Adept, all three share three commonalities: they were established early enough, raised a lot of money, and adopted a dual-drive strategy of both models and applications.

Starting with their establishment dates, C.AI was the earliest, founded in 2021, while Adept and Inflection were both founded in 2022.

In terms of funding, Inflection raised the most, with over $1.5 billion across two rounds. C.AI and Adept did not raise as much, but they still pulled in hundreds of millions of dollars: C.AI has raised over $190 million in total, and Adept close to $500 million across four rounds.

With all this money, all three companies chose the same development strategy: a dual-drive approach of both models and products.

C.AI, Inflection, and Adept are all best known for their AI applications. C.AI needs little introduction: its AI-companion metrics are among the best in the category, and its June traffic ranked fourth in the industry.

Inflection also started as a chat product company. In May 2023, Inflection launched its AI chat product Pi. Compared to ChatGPT, Pi emphasizes privacy and intimacy. The introduction of Pi on its official account reads: "Pi, your personal AI, do you have anything on your mind? Let's talk about it!"

Adept's product positioning is as an AI assistant. Simply put, Adept aims to build a new operating system or platform that makes using computers more "user-friendly," where a single instruction can complete all steps and tasks, rather than the back-and-forth question-and-answer format of ChatGPT.

While developing AI applications, they also invested heavily in models.

Although C.AI's product has performed well, founder Noam Shazeer still positioned C.AI as a general-purpose model company, aiming to provide personalized superintelligence to everyone. C.AI therefore invested significant resources in training its next generation of models to improve model quality.

Before its acquisition, Inflection had launched three models, with its most advanced, Inflection-2.5, approaching GPT-4's performance.

In September 2022, Adept released its self-developed large model, Action Transformer (ACT-1). In January 2024, it followed up with Fuyu-Heavy, a multimodal large language model that strengthened its ability to analyze text and images together.

These costly model technologies were taken over by the big companies along with the core teams. With both the core teams and the model technologies gone, these star companies have effectively announced their exit.

As these entrepreneurial stars exit the stage, they leave the industry with a lesson:

Small and medium-sized companies are rapidly withdrawing from the foundation-model race, and the dual-drive strategy of startups pursuing both models and applications is being proven unworkable.

/ 02 / Foundation Models Are Rapidly Depreciating

Although it now seems unwise for startups to develop their own large models, this choice was not unreasonable at the time.

First, they were founded early, when there were no open-source models like Llama available and training costs were not as staggering as they are now; pre-training from scratch was the only viable path.

Second, with the explosive popularity of ChatGPT, OpenAI began moving aggressively into the application layer, not only building out a plugin ecosystem but also shipping mobile apps, signaling its ambition to be more than an API provider. This posed a serious challenge to AI applications like Jasper and, at the same time, highlighted the enormous potential of model companies.

From that point on, there was a consensus: In the face of large models, the commercial barriers for AI applications are limited. Only companies that occupy both the model and application layers, known as "full-stack" companies, have the potential to capture the greatest value.

Beyond the three companies above, many application companies also began building out the model layer, including AI writing company Jasper, which had been hit hard by ChatGPT.

However, after a year, these startups discovered that the dual-drive approach of models and applications is not feasible for everyone.

On the one hand, the cost of upgrading foundation models is rising rapidly. The Information recently reported that OpenAI's losses this year could reach $4 billion, with over $3 billion spent on training alone, not counting $1.5 billion in employee costs. Only leading model companies backed by deep-pocketed giants can keep up; everyone else cannot.

According to earlier estimates, Character.AI's monthly inference costs are around $3.3 million, or roughly $40 million a year. Yet Character.AI's total revenue last year was only $15.2 million. In other words, the revenue generated by the AI application cannot even cover its inference costs, let alone the far higher training costs.
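As a rough back-of-envelope check on those figures (the cost and revenue numbers are the article's own estimates, not audited results), a minimal sketch:

```python
# Back-of-envelope check of the Character.AI figures above.
# All inputs are the article's estimates, not audited numbers.
monthly_inference_cost = 3.3e6          # ~$3.3M per month (estimated)
annual_inference_cost = monthly_inference_cost * 12
annual_revenue = 15.2e6                 # ~$15.2M reported for last year

print(f"Annual inference cost: ${annual_inference_cost / 1e6:.1f}M")               # ~$39.6M
print(f"Revenue covers {annual_revenue / annual_inference_cost:.0%} of inference")  # ~38%
```

Even before training costs enter the picture, revenue covers well under half of the estimated inference bill.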

On the other hand, as model prices continue to fall and open-source models grow stronger, large models are rapidly depreciating.

In July, OpenAI abruptly launched a new model, GPT-4o mini, which outperforms GPT-3.5 Turbo across the board while being over 60% cheaper. More recently, Google cut the price of Gemini 1.5 Flash to half that of GPT-4o mini.
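To put those percentages in concrete terms, a rough illustration follows; the per-million-input-token list prices used here are my own assumptions about mid-2024 pricing, not figures from the article:

```python
# Rough illustration of the price gaps described above.
# Per-1M-input-token list prices (USD, mid-2024) are assumptions, not from the article.
gpt_35_turbo    = 0.50    # GPT-3.5 Turbo input price
gpt_4o_mini     = 0.15    # GPT-4o mini input price
gemini_15_flash = 0.075   # Gemini 1.5 Flash input price after Google's cut

print(f"GPT-4o mini vs GPT-3.5 Turbo: {1 - gpt_4o_mini / gpt_35_turbo:.0%} cheaper")         # ~70%
print(f"Gemini 1.5 Flash vs GPT-4o mini: {gemini_15_flash / gpt_4o_mini:.0%} of the price")  # 50%
```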

As closed-source models become increasingly affordable, open-source models are also becoming more capable. In July, Meta released the open-source model Llama 3.1 405B, which outperformed GPT-4o and Claude 3.5 Sonnet in multiple tests.

This means that model costs for AI applications are falling fast. In the words of Benchmark partner Eisenberg, large models will be the fastest-depreciating assets in history: of the model companies, only one or two will make money for investors, while the rest will lose money.

Today, billions of dollars in training costs, declining model prices, and the rapid progress of open-source models have turned the startups that bet on models into pioneers who fell by the wayside.

Since the market does not need this many models, talent specialized in model technology has begun flowing back to the large corporations. Seen this way, talent-focused acquisitions are also a "correction" in how the AI industry's resources are distributed.

From here on, the path for AI startups is clearer: shift to AI applications as soon as possible, polish the product, and find better routes to commercialization.

Overseas, the model bubble has already begun to burst. Players in China, however, still fly the flag of model-and-product integration, for a simple reason: that is what it takes to raise money.

But will the situation be different in China? Only time will tell.

Written by Lin Bai
