May 23, 2024
Source: Dongjian Xinyanshe
From ChatGPT to Sora, from text and images to videos, from general large models to vertical large models... After over a year of exploration, large models have entered the second phase where deployment takes precedence.
Industry turbulence has intersected with the fervor of capital, fostering not only the down-to-earth work of true believers but also opportunists fishing in troubled waters. Looking back now, how far has the deployment of large models actually come, and how are large model vendors putting it into practice?
Today, we look at four leading cloud computing vendors - Alibaba, Baidu, Tencent, and Huawei - and, through their explicit or implicit strategic routes, examine how their paths to large model deployment have diverged.
01 Alibaba Cloud: Simultaneous Closed-Source and Open-Source Development
At the recently concluded Alibaba Cloud AI Leadership Summit in Beijing, Alibaba Cloud CTO Zhou Jingren showed the company's hand directly: "Alibaba Cloud is the only company in the world that continues to develop large models while also open-sourcing a large number of them."
The reason for this dual approach, according to Zhou Jingren, is to meet users' and developers' diverse needs for base models, which is also part of what "Model-as-a-Service" means.
In concrete practice, Alibaba Cloud released the closed-source SOTA large model Tongyi Qianwen 2.5 at this conference. According to the evaluation results of the authoritative benchmark OpenCompass, Tongyi Qianwen 2.5 scored on par with GPT-4 Turbo, achieving the highest ranking for domestic large models.
On the open-source front, Alibaba Cloud has gradually open-sourced more than a dozen models since August 2023. According to official data, Tongyi open-source models have been downloaded more than 7 million times, and the latest open-source 110-billion-parameter model achieved the best scores in multiple benchmark evaluations, surpassing Meta's Llama-3-70B.
Base models alone are not enough: needs vary across industries, and even within the same industry, different enterprises' needs are hard to unify, so a standardized base model is difficult to use directly. To address this, Alibaba Cloud, guided by the scenario needs of its developer ecosystem, has upgraded its AI Infra platform, Bailian, releasing Bailian 2.0.
Relying on Alibaba Cloud's AI infrastructure, Bailian 2.0 also upgrades its model development and application development tools as well as its computing power base, introduces more models, and pioneers compatibility with open-source frameworks such as LlamaIndex, so that enterprises can freely swap capability components to fit their own systems, as sketched below.
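To make the idea of swappable components concrete, here is a minimal sketch of what LlamaIndex compatibility can look like in practice: an OpenAI-compatible model endpoint is wired in as the framework's LLM component, so the rest of the application does not change when the model behind it is swapped. This is a generic illustration, not Bailian's official SDK; the endpoint URL, model name, and environment variable below are placeholders.

```python
# Generic sketch (not Bailian's official SDK): plug an OpenAI-compatible
# model endpoint into LlamaIndex as a swappable LLM component.
# The endpoint URL, model name, and env var below are placeholders.
import os

from llama_index.core.llms import ChatMessage
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="example-qwen-model",                      # placeholder model name
    api_base="https://example-platform.example/v1",  # placeholder endpoint
    api_key=os.environ["EXAMPLE_PLATFORM_API_KEY"],  # placeholder credential
    is_chat_model=True,
)

# Application code stays the same no matter which hosted model sits behind it.
reply = llm.chat([ChatMessage(role="user", content="Summarize our Q1 sales report.")])
print(reply.message.content)
```

Swapping in a different hosted or self-hosted model then only means changing the endpoint and model name, which is the "replace capability components" idea the platform pitches.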
As the largest cloud vendor in China, Alibaba Cloud has the largest business scale and the most comprehensive customer base. With the industry's AI routes far from converging, it is pulled by customer demand on one side and wary of technological risk on the other, which makes its across-the-board layout in large models easy to understand.
Taken together with Alibaba Cloud's largest-ever price cut in March, all of these moves point to one goal: a spiral rise of its cloud and AI businesses, a long-term bet on future growth.
02 Baidu Cloud: AI Native Applications as the Spearhead
Among the cloud computing giants, Baidu Cloud is the smallest in scale. However, large models align closely with the AI technology route Baidu has long adhered to, and since launching Wenxin Yiyan in March last year it has rolled out systematic initiatives around large model tool platforms, large-model reconstruction of its own applications, and ecosystem building, making it an extremely important force in the large model industry.
Baidu's approach is not complicated. On the one hand, through continuous evolution of Wenxin Yiyan, it ensures that Wenxin Yiyan's capabilities always remain at the forefront. The parameter scale of the Wenxin large model 4.0 version launched at last year's Baidu World Conference reached the trillion-level, with comprehensive capabilities comparable to GPT-4.
On the other hand, it emphasizes the importance of native applications. Baidu founder Robin Li analyzed that the essence of competition in large model applications is: "Competition among enterprises is not about big fish eating small fish, but fast fish eating slow fish. Making decisions faster than competitors is likely to win." This is in fact Baidu's competitive strategy for large model deployment: accelerating sprints, exploring multiple application possibilities, and placing particular emphasis on "AI native applications".
Baidu first used large models to comprehensively transform and refresh its own products, obtaining real usage feedback from its existing user base, which in turn accelerated the iteration of the models. It then combines this with cloud services, providing intelligent computing resources and training tools to help other enterprises develop their own models.
To this end, Baidu Intelligent Cloud has launched a series of platforms or tool products, such as "Qingduo" for helping generate marketing materials, "Lingjing," the Wenxin large model plugin development platform, and "Qianfan," the enterprise-level large model production platform.
At Baidu Cloud's first Ecosystem Conference this spring, it announced clearer divisions of labor and collaboration boundaries with partners, aiming at joint operations across three types of markets - top-tier markets, value markets, and high-potential markets - so as to close the loop on scenarios quickly and accelerate large model deployment.
03 Tencent Cloud: Pragmatism in Driving the Real Economy
Tencent entered the large model field relatively late, officially releasing its self-developed general-purpose large language model, Hunyuan, at the Tencent Global Digital Ecosystem Conference in September last year, and it has remained a relatively low-key, unconventional presence in the industry.
Before the release of Hunyuan, Tang Daosheng, Senior Executive Vice President of Tencent Group and CEO of the Cloud and Smart Industries Group, elaborated on Tencent's large model values: "General large models are not necessarily the optimal solution to meet industry scenario needs. Enterprises need targeted industry large models, combining their own data for training or fine-tuning, to create more practical intelligent services at a reasonable cost."
From this, pragmatism is distilled: on the one hand, the focus is on whether large models solve specific problems rather than on parameter size; on the other, on which combination of techniques solves those problems most efficiently.
Wu Yunsheng, Vice President of Tencent Cloud and Head of Tencent Cloud Intelligence, once said: "Whether it's hundreds of millions, billions, tens of billions, or over a trillion, we are not concerned with the number of model parameters. We are more concerned about how to solve customers' problems, hoping to use the most effective and lowest-cost means to do so."
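As a rough illustration of what "fine-tuning on your own data at reasonable cost" can mean in engineering terms, the sketch below applies LoRA, a common parameter-efficient fine-tuning technique, to an open model using the open-source peft and transformers libraries. It is a generic example of the approach the passage describes, not Tencent Cloud's actual pipeline; the base model name is a placeholder.

```python
# Generic LoRA fine-tuning sketch with open-source peft/transformers;
# not Tencent Cloud's pipeline. The base model name is a placeholder.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("example-open-7b-model")  # placeholder

# Only small low-rank adapter matrices are trained, not the full base model,
# which is what keeps an industry-specific model affordable to build.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of total parameters
# ...then train on the enterprise's own domain data with a standard Trainer loop.
```

The point matches the quote above: the cost lever is not parameter count but how little of the model has to be touched to solve the specific problem.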
In concrete practice, Tencent Cloud has a very clear line of thinking. The first step is to anchor benchmark customers; the second is to radiate outward to mid-sized enterprises across the upstream and downstream of the industry chain, with the real economy as the key to breaking the deadlock.
Currently, Tencent Cloud has worked with leading enterprises in industries such as government affairs, exploring more than 50 industry large model application solutions across more than 20 industries. Taking Tencent Cloud's Digital Intelligence Factory as an example, its MaaS capabilities can cut the time needed to replicate a digital avatar to 24 hours, significantly reducing costs. The cultural tourism large model and the OCR large model created for the financial sector are already implemented cases.
04 Huawei Cloud: Deepening into All Industries Based on Computing Power Infrastructure
Unlike many large model vendors who initially focused on technology and benchmarked against ChatGPT's capabilities, Huawei Cloud's large model strategy has always been aimed at deployment from the start. At last year's World Artificial Intelligence Conference, Hu Houkun, the rotating chairman of Huawei, said in his speech that the key to the development of artificial intelligence lies in "going deep and becoming practical." Huawei's positioning is to empower industrial upgrading, serve all industries, and serve scientific research.
Around this positioning, Huawei's large model strategy has emerged with two paths: one is in the field of large models, from general large models to industry large models, enabling artificial intelligence to empower industries and assist scientific research; the other is in the field of computing power, creating a robust computing power base.
When Huawei Cloud released Pangu large model 3.0, it proposed the slogan "No Poetry, Just Action." Following the "5+N+X" three-tier architecture of basic models, industry models, and scenario models, Pangu has already been deployed in more than 10 industries, including finance, manufacturing, government affairs, coal mining, and railways, supporting AI applications in more than 400 business scenarios.
In a typical scenario, the intelligent upgrade of coal mines, the Pangu Mine large model needs only to ingest massive amounts of unlabeled mine-scene data for pre-training, learning autonomously without supervision. A single large model can cover more than 1,000 sub-scenarios across the coal mining, excavation, machinery, transportation, ventilation, and washing workflows. The Pangu Mine large model is currently in use at eight mines nationwide.
In strengthening the computing power base, Huawei's keywords are "independent research and development" and "openness."
In lower-level research on computing efficiency, Huawei's approach is architectural innovation. Based on its self-developed Da Vinci architecture, it launched the Ascend processors and built Ascend AI clusters around them. China's largest AI computing cluster, Pengcheng Cloud Brain II in Shenzhen, not only achieves full-stack hardware and software autonomy but has also repeatedly topped multiple global AI performance rankings, with a computing power of 1000P.
It is not difficult to see that Huawei not only directly sells "fish" but also teaches "fishing" skills.
05 Conclusion
In summary, although these four cloud computing giants emphasize different approaches to large model deployment, their goals are highly consistent: each is extending from its original business, either reinforcing existing strengths or opening up new growth.
It is worth mentioning that beyond competing on the technical and business fronts of large models, these major players are also widely involved in investing in large model startups. Among China's top five AI unicorns (Moonshot AI, Zhipu AI, MiniMax, 01.AI, and Baichuan Intelligence), Alibaba has invested in all five, while Tencent has invested in Baichuan Intelligence, Zhipu AI, and MiniMax. In addition, large model companies such as Shenyan Technology and Wuwen Xinqiong are also on Tencent's investment list.
This means that the competition in large models does not only stay at the level of large models; the underlying capital battles are equally fierce.