04/08 2026
A few days ago, a fleet of Luobo Kuaipao self-driving vehicles in Wuhan came to a sudden halt on elevated roads and major thoroughfares, creating a surreal scene of motionless cars in the middle of traffic. Thankfully, no safety incidents occurred, and all passengers disembarked safely.
However, the incident triggered a wave of criticism: “Early starter, late finisher,” “Autonomous driving safety is questionable,” “Is this the best Baidu can do?”
To be fair, criticizing emerging technology is a low-risk, high-reward move online—it drives engagement and comes with the moral high ground of “consumer advocacy.”
But this time, the story may be more nuanced. On the technical front, autonomous driving safety protocols may not be at fault. Instead, the root cause likely lies in Luobo Kuaipao’s “sensitive skin” safety strategy—a design philosophy that prioritizes extreme caution over operational fluidity.
It’s Not About Capability, But Safety Philosophy
Let’s start with a fundamental question: How should a Level 4 autonomous vehicle behave when it encounters a malfunction?
The focus on Level 4 is intentional. Unlike Level 3 and below, where responsibility shifts between driver and manufacturer, Level 4 systems—where no human driver is required—place full accountability on the operator (in this case, Baidu).
To put this in context, consider today’s Level 2+ assisted driving systems (e.g., Tesla Autopilot, XPeng NGP). These systems explicitly state: “When engaged, the manufacturer assumes liability; if the driver fails to intervene when prompted, they bear responsibility.” In practice, this means drivers remain legally responsible for accidents, as manufacturers often design systems to disengage abruptly when faced with unfamiliar scenarios.
For true autonomous vehicles (Level 4+), however, policy frameworks like China’s Implementation Guidelines for the Pilot Program of Access and Road Use of Intelligent Connected Vehicles assign liability to the manufacturer. This shifts the safety burden entirely onto the company’s shoulders.
Given this, large-scale deployments must prioritize systemic safety. A single vehicle’s decision to pull over could trigger secondary accidents, especially in dense urban environments. Manufacturers, therefore, rely on two layers of control: cloud-based oversight and vehicle-end execution.
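The division of authority between the two layers can be sketched as a minimal state machine. All names here are hypothetical illustrations of the architecture described above, not Baidu's actual protocol:

```python
from enum import Enum, auto

class VehicleState(Enum):
    DRIVING = auto()
    HALTED = auto()        # minimal-risk stop in lane
    PULLED_OVER = auto()   # an independent maneuver needing local authority

class Vehicle:
    """Hypothetical Level 4 vehicle whose final authority rests in the cloud."""

    def __init__(self, cloud_has_final_say: bool = True):
        self.cloud_has_final_say = cloud_has_final_say
        self.state = VehicleState.DRIVING

    def on_cloud_command(self, command: str) -> None:
        # The cloud can always command an immediate in-lane stop.
        if command == "HALT":
            self.state = VehicleState.HALTED

    def attempt_pull_over(self) -> bool:
        # Pulling over requires independent decision-making; a centralized
        # architecture forbids it without cloud authority.
        if self.cloud_has_final_say:
            return False  # stay halted and wait for the cloud
        self.state = VehicleState.PULLED_OVER
        return True

v = Vehicle(cloud_has_final_say=True)
v.on_cloud_command("HALT")
assert v.state is VehicleState.HALTED
assert v.attempt_pull_over() is False  # vehicle freezes in place
```

Under this split, a fleet-wide halt is not a crash of the driving stack but the designed response when the cloud withdraws permission to move.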
In this incident, the cloud issued the stop command. Critics argue: “Why not pull over? Wouldn’t that be safer?”
Not necessarily.
After a cloud-issued halt, pulling over requires independent vehicle decision-making. But without a safety officer onboard, who ensures the maneuver doesn't endanger passengers or other road users? Some speculate that Luobo's vehicles lack the technical redundancy to pull over safely. In fact, their sensor and compute hardware is more than capable of an autonomous pull-over. The constraint is architectural: Baidu restricts vehicles to data-uploading roles, with final decisions reserved for the cloud. In that context, an emergency stop was the safest available option, not a technological failure but a deliberate design choice.
A Global Pattern: Over-Cautiousness Leads to Gridlock
This isn’t unique to China. In December 2025, a citywide power outage in San Francisco stranded hundreds of Waymo vehicles at intersections. While Waymo’s systems could treat non-functioning traffic lights as four-way stops, their conservative strategy required remote human confirmation before proceeding. The surge in requests overwhelmed operators, leaving vehicles immobilized.
[Image: Waymo vehicles stranded in San Francisco after a power outage.]
Sound familiar? Like Luobo Kuaipao, Waymo’s remote dependency became a bottleneck under extreme conditions, causing systemic paralysis.
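The Waymo failure is at heart a queueing problem: when confirmation requests arrive faster than remote operators can clear them, the backlog grows without bound and every vehicle waits. A toy fluid model makes the arithmetic concrete (all figures are illustrative, not Waymo's actual staffing or traffic):

```python
def backlog_after(minutes: int, arrivals_per_min: float,
                  operators: int, handled_per_op_per_min: float) -> float:
    """Toy fluid model: requests queue up whenever demand exceeds
    total operator throughput; the queue never goes negative."""
    capacity = operators * handled_per_op_per_min
    backlog = 0.0
    for _ in range(minutes):
        backlog = max(0.0, backlog + arrivals_per_min - capacity)
    return backlog

# Normal operations: 20 requests/min against 10 operators
# clearing 3/min each (30/min capacity) -> no queue forms.
assert backlog_after(60, 20, 10, 3) == 0.0

# Citywide outage: every dark intersection triggers a request.
# 200 requests/min against the same 30/min capacity piles up fast:
# 170 unserved requests accumulate every minute.
stranded = backlog_after(60, 200, 10, 3)
assert stranded == 60 * (200 - 30)  # 10,200 pending requests after an hour
```

The lesson generalizes: any human-in-the-loop safeguard sized for normal load becomes the single point of failure precisely during the correlated events when it is needed most.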
Contrast this with Tesla's approach. From June 2025 onward, Tesla's Robotaxi service in Austin recorded 14 collisions, a rate roughly 4 to 8 times that of human drivers. Tesla's pure-vision, end-to-end neural network delegates full autonomy to the vehicle, prioritizing agility over caution. The problem isn't that Tesla's cars "won't move" but that they "move too boldly."
Three companies, three strategies: Tesla empowers individual vehicles (high accident rate); Waymo balances autonomy with remote oversight (collapses under scale); Luobo Kuaipao centralizes control (vulnerable to systemic shocks).
The Trade-Off: Safety vs. Fluidity
Luobo Kuaipao's strategy falls into the "overly cautious" camp, akin to "sensitive skin" that reacts to the slightest stimulus by freezing. This produced an embarrassing public spectacle, but from a passenger's perspective, a safe stop is preferable to a risky maneuver. For Baidu, the approach also minimizes liability risk: a rational, if imperfect, choice.
Why the Backlash?
Autonomous driving represents AI’s real-world frontier. When large language models make errors, society tolerates it as “growing pains.” But when self-driving cars prioritize safety over convenience, they’re labeled “incompetent.”
Criticizing Luobo Kuaipao for “inadequate technology” is easy but misses the point. Companies like Baidu, Pony.ai, and WeRide are pioneering a path where even minor missteps trigger panic. Yet they’ve chosen the hardest route: not just making vehicles move, but stopping them safely.
The lesson? Autonomous driving’s greatest challenge isn’t innovation—it’s managing the tension between safety and societal expectation. For now, Luobo Kuaipao’s “sensitive skin” may be its greatest strength, even if it doesn’t look pretty.