March 30, 2026

「The Era of Pseudo-Intelligence Ends, the Era of True Intelligence Begins」
Author | Zhen Yao Editor | Li Guozheng Produced by | Bangning Studio (gbngzs)
Recently, the buzz around large models in vehicles has resurfaced.
On March 26, FAW Hongqi and Alibaba Cloud jointly announced that the Qianwen intelligent agent would be integrated into Hongqi’s intelligent cockpit, debuting in the new Hongqi HS6 PHEV.
Meanwhile, that same evening, the IM LS8 launched pre-sales. A few days prior, the automaker had announced itself as the world’s first to mass-produce and deploy the Qianwen large model in vehicles—its super intelligent agent, IM Ultra Agent, being one of the LS8’s core selling points.
Other large models are also on the brink of vehicle deployment.
Dongfeng Motor quietly revealed that its self-developed Taiji large model had passed the National Internet Information Office’s generative AI service registration, securing regulatory compliance. This means the AI system, tailored exclusively for the automotive industry, is one step closer to mass production and vehicle integration.
“The first half of the automotive industry’s transformation was an energy revolution; the second half will be an intelligence revolution, or more precisely, an AI-driven automotive revolution,” said Xiang Jiao, CTO of IM Motors. With intelligent driving foundations maturing and large models exploding in capability, the era of the “super intelligent agent” in vehicles has arrived.
Huang Rui from Dongfeng Motor’s R&D Institute AI Lab stated, “Just as humans rely on eyes, ears, and touch for judgment and decision-making, vehicles need multi-sensory coordination and deep cognitive processing.” He emphasized that the industry’s current focus is on enabling vehicles to truly understand the world, interpret user needs, and evolve autonomously.
Behind this surge of large models in vehicles lie numerous questions: Is this intelligentization frenzy a true revolution for in-car AI, or just another industry gimmick driving involution? Are these highly touted large model agents a genuine necessity for vehicles, or merely flashy but impractical features?
▍01 Ending the Era of Pseudo-Intelligence
Over the past five years, in-car voice assistants have been a standard selling point for automakers.
Nearly every new car’s promotional materials tout phrases like “99% voice wake-up rate,” “supports continuous dialogue,” and “multi-round interaction,” as if shouting “Xiao X, turn on the AC” encapsulates the entirety of intelligence.
But the truth is, most in-car voice assistants are pseudo-intelligent scams.
Traditional in-car voice systems rely on fixed command libraries, essentially matching keywords—you must use precise phrasing for it to execute set actions.

For example, saying “I’m cold” might only turn on the AC; asking for “a restaurant with parking” lists venues without filtering for parking availability; stating “Leave for work at 8 AM tomorrow and grab coffee” splits into two disjointed commands, failing to plan a coherent itinerary.
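The keyword-matching approach described above can be sketched in a few lines of Python. The command table and phrases below are hypothetical, but they illustrate why "I'm cold" maps to a single hard-coded action while a compound request falls through entirely:

```python
# Minimal sketch of a fixed-command voice system (hypothetical phrases).
# Each utterance is matched against a keyword table; anything that does
# not hit an entry falls back to "Sorry, I didn't understand."

COMMANDS = {
    "turn on the ac": "AC_ON",
    "i'm cold": "AC_ON",          # hard-coded synonym, not real understanding
    "navigate home": "NAV_HOME",
}

def handle(utterance: str) -> str:
    key = utterance.lower().strip()
    for phrase, action in COMMANDS.items():
        if phrase in key:
            return action
    return "UNRECOGNIZED"

print(handle("I'm cold"))                                # AC_ON
print(handle("Leave at 8 AM tomorrow and grab coffee"))  # UNRECOGNIZED
```

The compound request fails not because the words are unrecognized, but because nothing in the system can decompose one utterance into two coordinated tasks, which is exactly the gap the article's examples point at.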
Even more awkwardly, this command-based “weak intelligence” struggles to cover basic scenarios. While navigating, adjusting AC temperature causes stuttering; accents lead to recognition failures; vague requests prompt “Sorry, I didn’t understand.”
Automakers aren’t unaware of this, but under pressure to compete in intelligence, they’ve doubled down on involution. The industry races to improve wake-up speed, interaction rounds, and recognition accuracy—yet admits little: voice assistants without large model support are glorified “artificial idiots,” incapable of addressing real user pain points.
“Past voice interactions were fundamentally command-based—users issued orders, vehicles executed,” said Xiang Jiao. “This failed to unlock AI’s true value.”
Only with the advent of large model agents did this facade shatter.
Xiang argues that what consumers truly need is for vehicles to understand intent and proactively handle tasks. This means shifting from voice-controlled operations to natural conversations, enabling vehicles to comprehend users and fulfill needs across all scenarios—what they call “omnidirectional delegation.”

Unlike traditional voice assistants, large model agents evolve from passive execution to proactive service, interpreting needs and completing tasks without rigid commands.
For instance, saying “Plan a business trip tomorrow, avoid morning rush hour, and book a hotel and airport transfer” automatically triggers navigation, Fliggy, and Gaode to handle routing and reservations seamlessly.
Users gain a dedicated travel assistant rather than operating a car system, transforming cockpit intelligence competition from parameter stacking to experience-driven innovation.
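The agent-style flow described here, one instruction decomposed into several app-level tasks, can be sketched as a toy planner. The intents, app names, and keyword-based routing below are illustrative assumptions, not IM's or Alibaba's actual implementation; a production agent would use an LLM for the decomposition step:

```python
# Toy sketch of agent-style task decomposition (all names hypothetical).
# A single natural-language request is split into sub-tasks, and each
# sub-task is routed to the service that can fulfill it.

from dataclasses import dataclass

@dataclass
class Task:
    intent: str
    app: str

# Hypothetical routing table: which service handles which intent.
ROUTES = {
    "route_planning": "navigation",
    "hotel_booking": "travel_app",     # e.g. a Fliggy-like service
    "airport_transfer": "ride_hailing",
}

def plan(request: str) -> list[Task]:
    """Decompose a compound request into routed sub-tasks.
    A real agent would use an LLM here; this keyword version only
    illustrates the decomposition-and-routing structure."""
    tasks = []
    if "trip" in request or "rush hour" in request:
        tasks.append(Task("route_planning", ROUTES["route_planning"]))
    if "hotel" in request:
        tasks.append(Task("hotel_booking", ROUTES["hotel_booking"]))
    if "transfer" in request:
        tasks.append(Task("airport_transfer", ROUTES["airport_transfer"]))
    return tasks

tasks = plan("Plan a business trip tomorrow, avoid morning rush hour, "
             "and book a hotel and airport transfer")
for t in tasks:
    print(f"{t.intent} -> {t.app}")
```

The structural difference from the command table in the previous section is that one utterance now yields a list of coordinated tasks rather than a single matched action or a failure.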
This marks the finals of in-car AI, the end of pseudo-intelligence and the dawn of true intelligence.
▍02 Competing on Delivery
By 2026, deploying large models in vehicles is no longer a gimmick but a survival imperative.
Yet many misunderstand the finals’ rules: the contest isn’t about model parameters or technical sophistication but delivering tangible, user-dependent experiences.
The trajectory of this race is clear and unforgiving.
In 2023, ChatGPT ignited generative AI hype overseas, with Baidu’s ERNIE Bot fueling China’s first large model vehicle frenzy.

Nearly 10 automakers, including Changan, Jiyue, LanTu, Hongqi, and Great Wall, swiftly announced partnerships. Back then, large models were tickets to the intelligentization transformation—presence mattered more than power.
By 2025, DeepSeek’s rise sparked a second AI vehicle wave. Within months, nearly 20 automakers, including Geely, Chery, BYD, Great Wall, and Leapmotor, integrated the technology, vying for momentum in the second half of intelligentization.

In reality, many automakers spent the past two years stacking parameters—touting trillion-parameter models and multimodal interactions. Most of these technologies remained demos, unable to scale or solve user pain points.
Post-purchase, users discovered these “large model agents” were just repackaged voice assistants, incapable of proactive service.
As 2026 unfolds, the hype fades and the rules of the finals shift.
The industry no longer competes on parameters or modalities but on delivering user-perceptible, reliable experiences.
Alibaba’s Qianwen, Huawei’s HarmonyOS Cockpit 5.0, XPENG’s Tianji AIOS 6.0, NIO’s Banyan, and Dongfeng’s Taiji large model—whether supplied by tech giants or self-developed by automakers—have adopted distinct strategies.
Alibaba’s Qianwen eschews model hype, focusing instead on a cloud-based Agent + full-ecosystem service loop.

Its strength lies in dissecting complex user instructions, enabling cross-app tasks like navigation, dining reservations, ticketing, and itinerary planning in a single phrase. Leveraging Alibaba’s mature ecosystem, it ensures complete service chains, cost control, and engineering feasibility.
Unlike rivals, Qianwen avoids deep hardware-ADAS integration, concentrating instead on intent interpretation, decision-making, and execution. This lightweight, efficient model offers automakers a reliable path to rapid intelligent agent mass production.
The IM LS8 and Hongqi HS6 PHEV serve as its deployment vehicles, transitioning from voice commands to omnidirectional delegation.
As Xiang Jiao puts it: “Whether commuting daily or traveling with kids, a single phrase or command lets it perceive your intent, understand the context, and execute a series of actions.”
▍03 Who Will Prevail?
Huawei forged a separate full-stack collaboration path. Its HarmonyOS Smart Cockpit (HarmonySpace 5), labeled “seamless integration of human, vehicle, and home,” sets industry benchmarks for smoothness.
Built on the MoLA hybrid large model architecture, Huawei pursues a hardware-system-ecosystem full-stack collaboration approach, excelling in multimodal perception, four-zone voice recognition, continuous dialogue, and fuzzy intent understanding. It enables seamless transitions between phones, cockpits, and smart homes.

Huawei’s edge lies in HarmonyOS’s natural ecosystem barriers and ultra-smoothness, essentially transplanting mature mobile internet experiences into vehicles. High standardization and rapid automaker adaptation are its core market levers.
Now, consider Dongfeng Motor’s Taiji large model.
“With Taiji, vehicles become more human-like—not just understanding speech but judging surroundings and offering proactive services,” said Sun Kuan, an AI R&D engineer at Dongfeng’s AI Development Center.
He explained that traditional intelligent driving separates perception, prediction, and control, leading to algorithmic delays and weak reasoning. Taiji, however, achieves “end-to-end” integration, enabling human-like instant thinking and millisecond-level responsiveness.

As the finals intensify, three trends are reshaping the in-car large model landscape:
First, voice assistants face marginalization.
Traditional voice systems without large model support will gradually exit the market.
Second, tech firms and automakers deepen partnerships.
Large model R&D demands massive investment, making solo automaker efforts inefficient and costly; solo tech firm deployments lack vehicle architecture support for true cockpit-ADAS fusion.
Future competition will hinge on ecosystem alliances between automakers and tech giants like Alibaba, Tencent, and Huawei. Alibaba’s Qianwen partnerships with IM and Hongqi exemplify this trend.
Third, intelligence competition returns to user essence.
Victory in the finals depends on translating large model agent capabilities into indispensable daily driving experiences that genuinely resolve pain points—an ultimate return to the value of the race itself.
For automakers, does integrating in-car large models guarantee success?
In reality, the finals have just begun.
Partnering with Alibaba’s Qianwen, Huawei’s HarmonyOS Cockpit, or Tencent’s Yuanbao merely grants entry. The true test lies in subsequent fusion and optimization. For Qianwen, the key is aligning its capabilities with automakers’ brand positioning and user needs to craft differentiated intelligent experiences.
After all, the goal of intelligence is never flashiness but practicality.
(Featured image generated by AI)
