Reassessing Full Self-Development: Time to Embrace Strategic Collaboration in Intelligent Driving

March 31, 2025

At the 2025 China EV100 Forum, Yu Kai, founder and CEO of Horizon Robotics, ignited widespread debate within the intelligent driving industry. He emphasized that automakers' blind pursuit of full self-development is unwise. Instead, a more forward-looking strategy would involve focusing on developing unique, differentiated functions while outsourcing standardized tasks to third-party experts.

Full Self-Development: An Inefficient Conundrum

Intelligent driving technology is evolving at an astonishing pace, which poses significant challenges to automakers aiming for full self-development. From a capital and technical standpoint, the required investment is immense: a single chip R&D project, for instance, can take years and billions to complete, a financial burden not every automaker can shoulder.

Moreover, the inefficiency of full self-development cannot be overlooked. Intelligent driving technology iterates far faster than traditional automotive R&D cycles, so full self-development demands not only a large R&D team but also constant adaptation to new advances. Yu Kai noted that the underlying logic of intelligent driving has shifted from "imitating humans" to "surpassing humans," which requires a heavier focus on mathematical logic and virtual simulation data rather than merely accumulating user data. Under the full self-development model, however, an automaker's resources are spread thin across every link of the chain, hindering in-depth research in key areas and stunting technological progress.

Data and Computing Power: The Foundations of Intelligent Driving

The development of intelligent driving technology relies heavily on robust data support and computing power. Smart cars generate vast amounts of data during operation: according to Huawei's predictions, a single vehicle can generate nearly 10TB of data daily during autonomous driving R&D and about 2TB daily in commercial deployment. This data, encompassing vehicle status, road conditions, and driving behavior, provides a wealth of material for AI model training. Substantial computing power is equally crucial for processing this data and making swift decisions.
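To give a sense of scale, the per-vehicle figures cited above can be turned into a back-of-envelope fleet estimate. The fleet sizes below are assumptions chosen purely for illustration, not figures from the article:

```python
# Back-of-envelope fleet data-volume estimate, using the per-vehicle
# figures cited above. Fleet sizes are hypothetical.
RD_TB_PER_VEHICLE_DAY = 10          # autonomous-driving R&D phase
COMMERCIAL_TB_PER_VEHICLE_DAY = 2   # commercial deployment phase

def daily_volume_tb(vehicles: int, tb_per_vehicle: float) -> float:
    """Total raw data generated by a fleet in one day, in TB."""
    return vehicles * tb_per_vehicle

# A hypothetical 500-vehicle R&D fleet:
rd_fleet = daily_volume_tb(500, RD_TB_PER_VEHICLE_DAY)  # 5,000 TB, i.e. ~5 PB/day
# A hypothetical 100,000-vehicle commercial fleet:
commercial_fleet = daily_volume_tb(100_000, COMMERCIAL_TB_PER_VEHICLE_DAY)  # 200,000 TB

print(f"R&D fleet: {rd_fleet:,.0f} TB/day")
print(f"Commercial fleet: {commercial_fleet:,.0f} TB/day")
```

Even under conservative assumptions, the volumes land in the petabyte-per-day range, which is why storage and training infrastructure costs weigh so heavily in the self-development calculus.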

Large model training based on LLMs (Large Language Models) and VLMs (Vision-Language Models) is pivotal for enhancing intelligent driving performance. LLMs enable the system to understand natural language instructions, offering a more personalized driving experience, while VLMs process visual information, enhancing the vehicle's environmental perception. By training these large models, an intelligent driving system can better comprehend complex traffic scenarios and make more precise decisions.

AI in the Physical World: The Need for Systematic Solutions

As artificial intelligence continues to advance, its next major direction is deep integration into the physical world: aligning closely with the real economy to achieve a profound understanding of, and comprehensive support for, real-world scenarios. Embodied intelligence, a key application in this shift, faces three core challenges to large-scale adoption: safety, coordination, and cost.

Intelligent agent terminals have limited sensor coverage, leaving blind spots they cannot perceive or interpret. When many agents operate at scale, they must coordinate their actions and share intentions over a network. Moreover, agents cannot stack hardware, sensors, and computing power indefinitely; like mobile phones, they will eventually need to offload some functions to the cloud. These systemic issues call for an overall breakthrough in the form of a real-time AI network for the physical world.
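The offloading idea above can be sketched as a simple placement decision: safety-critical work with tight deadlines must stay on board, while heavy but latency-tolerant work can be hosted in the cloud. All thresholds, fields, and task names here are hypothetical, for illustration only:

```python
# Illustrative sketch of local-vs-cloud function placement for an
# intelligent agent. All numbers and names are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    compute_tops: float   # compute the task needs (TOPS)
    deadline_ms: float    # hard latency budget

@dataclass
class Agent:
    local_tops: float     # on-board compute budget
    network_rtt_ms: float # round-trip time to the cloud

def place(task: Task, agent: Agent) -> str:
    """Keep tight-deadline work local; offload what exceeds the on-board budget."""
    if task.deadline_ms <= agent.network_rtt_ms:
        return "local"    # the cloud cannot answer within the deadline
    if task.compute_tops <= agent.local_tops:
        return "local"    # fits the on-board compute budget
    return "cloud"        # too heavy for the terminal, latency permits offload

agent = Agent(local_tops=100, network_rtt_ms=30)
print(place(Task("emergency braking", 5, 10), agent))   # local
print(place(Task("HD-map refresh", 500, 2000), agent))  # cloud
```

The point of the sketch is the asymmetry: network latency sets a hard floor on what can ever be offloaded, which is why a real-time AI network matters as much as raw cloud capacity.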

Beyond the AI network, a large AI model with a deep understanding of the physical world is essential, serving in effect as the physical world's "cognitive hub." Unlike traditional AI models that rely on static data, this model integrates vision-language models (VLM) and large language models (LLM) and possesses three core capabilities: multimodal understanding, spatiotemporal reasoning, and adaptive evolution. It can deeply comprehend and perceive the physical world, recognize and understand multimodal data such as videos, images, and documents, and support natural language dialogue and logical reasoning.

The AI network for the physical world, built on this large model, can provide comprehensive real-time environmental information, presenting everything within several kilometers of the intelligent agent, unaffected by weather conditions or occlusion. At the same time, AI infrastructure can predict or detect issues promptly and push that information to each terminal through the network, avoiding safety risks caused by missing or delayed information.

Vehicle-Road-Cloud Integration: The Practice of Collaborative Networks

In the realm of intelligent driving, vehicle-road-cloud integration is an exemplary practice of this collaborative, division-of-labor approach. True vehicle-road-cloud integration is not a loose combination of independently developed vehicle, road, and cloud technologies, but a deep fusion in which the three operate in concert.

In some cities' intelligent transportation pilot projects, professional teams focus on the overall architecture design and technology integration of vehicle-road-cloud integration. They deploy advanced sensors and communication equipment on roads to enable real-time traffic condition perception. Vehicles are equipped with corresponding intelligent terminals that can receive information from roads and the cloud, facilitating precise driving decisions. The cloud serves as the brain, integrating and analyzing data uploaded by vehicles and roads for comprehensive traffic scheduling and optimization.
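The three-way data flow described above can be sketched as a minimal pipeline: the road end perceives, the cloud aggregates and schedules, and the vehicle end acts on the result. Component interfaces and message fields here are hypothetical, for illustration only:

```python
# Minimal sketch of the vehicle-road-cloud data flow. All interfaces
# and message fields are hypothetical.
from typing import Dict, List

class RoadsideUnit:
    """Road end: sensors perceive local traffic and publish observations."""
    def __init__(self, location: str):
        self.location = location
    def perceive(self) -> Dict:
        return {"source": f"rsu@{self.location}", "obstacles": ["stalled truck"]}

class Cloud:
    """Cloud end: the 'brain' that aggregates vehicle and road reports."""
    def __init__(self):
        self.reports: List[Dict] = []
    def ingest(self, report: Dict) -> None:
        self.reports.append(report)
    def advisory(self) -> Dict:
        obstacles = [o for r in self.reports for o in r.get("obstacles", [])]
        return {"reroute": bool(obstacles), "obstacles": obstacles}

class Vehicle:
    """Vehicle end: fuses cloud advisories into its driving decision."""
    def decide(self, advisory: Dict) -> str:
        return "reroute" if advisory["reroute"] else "continue"

rsu = RoadsideUnit("km-12")
cloud = Cloud()
cloud.ingest(rsu.perceive())                    # road -> cloud
decision = Vehicle().decide(cloud.advisory())   # cloud -> vehicle
print(decision)  # reroute
```

The sketch makes the division of labor concrete: no single party builds all three ends, yet the vehicle benefits from perception it could never have achieved with on-board sensors alone.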

This integrated model offers significant advantages over automakers developing vehicle-end, road-end, and cloud-end technologies independently. It not only reduces overall R&D costs, avoiding redundant construction and resource waste but also markedly enhances the safety and traffic efficiency of intelligent driving.

Industry Future: "20% Self-Development + 80% Outsourcing" Becomes a Trend

Yu Kai boldly predicts that the intelligent driving industry will gradually settle into a stable pattern of "20% self-development + 80% outsourcing." The top 20% of automakers, with their robust capabilities, can self-develop differentiated functions to build unique competitive advantages. The remaining 80% are better served by relying on third-party suppliers for standardized functions, achieving optimal resource allocation.

This pattern is forming out of necessity. On one hand, much of intelligent driving technology is standardized, making it hard to differentiate or build brand value in those areas. On the other hand, rapid technological evolution and high R&D costs render full self-development increasingly impractical. Through open cooperation models, Horizon Robotics has already helped multiple automakers democratize intelligent driving, bringing it to more affordable models, and this trend is poised to accelerate.

Focusing on Collaboration for Automakers' Stable and Long-Term Development

Competition in the era of intelligent driving is no longer a sprint but a marathon testing stability and sustainable development. Automakers that insist on full self-development risk squandering resources and missing the benefits of rapid technological advances. Rather than blindly pursuing comprehensiveness, it is wiser to concentrate on developing differentiated functions and confidently entrust standardized tasks to third-party suppliers. Only then can automakers identify their core competitiveness amid the wave of intelligent transformation and achieve high-quality, stable development.
