When a human driver hits someone, it's called an "accident," but when a self-driving car hits someone, it's called a "disaster." How can autonomous driving escape this "double standard" dilemma?

02/26/2026

Introduction

On January 23, 2026, near an elementary school in Santa Monica, California, a Google Waymo self-driving taxi collided during the morning rush hour with a child who suddenly darted out from behind a parked SUV, leaving the child with minor injuries.

Preliminary investigations revealed that the vehicle had reduced its speed from 17 mph (27.3 km/h) to below 6 mph (9.6 km/h) before the collision. Waymo claimed that if a human driver had been at the wheel, the impact speed might have reached 14 mph.

This incident acts as a prism, refracting the sharpest contradictions of the autonomous driving era: the public demands far more from machine drivers than it tolerates from human ones.

As of February 10, 2026, the California Department of Motor Vehicles database had recorded 15 collision reports involving self-driving taxis; in 2025, the total for the year was nearly 140.

Meanwhile, over 40,000 people die annually in the U.S. from traffic accidents caused by human drivers.

Let's discuss this together at "Self-Driving Cars Are Here" (WeChat Official Account: Autonomous vehicles have arrived)!

(For further reading, click: Tesla's First Cybercab Rolls Off the Line, But Musk's "Self-Driving Taxi" Myth Crumbles: Only 42 Vehicles Deployed in 8 Months, 19% Availability, 9x Accident Rate—Yet Still Promoting 2 Million Units?)

I. The "Double Standard" Scene: The Silence of Statistics and the Outcry of Moral Judgment

The autonomous driving industry is trapped in a paradoxical game of "proving you're safer than me."

Waymo's Safety Claims vs. Public Reaction: Waymo declares on its website that compared to human drivers over the same mileage, its self-driving vehicles reduce serious injury or fatal accidents by 90% and overall injury accidents by 81%.

However, when specific accidents occur, these macro statistics pale in the face of public emotion.

University of San Francisco professor William Riggs cuts to the chase: "People don't see this as a statistical probability issue—they see it as a moral and emotional one."

The Transparency Dilemma in Accident Handling:

Former NHTSA senior safety advisor Missy Cummings publicly called on Waymo to release the full accident footage, but the company refused, saying the footage had already been provided to regulators.

This information asymmetry exacerbates public distrust.

University of South Carolina law professor Bryant Walker Smith pointed out: "We should stop asking whether people trust this technology and start asking whether the companies behind it are trustworthy."

The Imbalance of Comparison:

Human drivers cause countless accidents daily, most deemed "accidents" or "negligence," while every self-driving accident is scrutinized individually and becomes evidence of "technological unreliability."

Craig Melrose of global mobility services provider HTEC bluntly stated: "People tolerate human errors but expect robots to be nearly perfect."

II. The Roots of "Double Standards": A Trio of Fear, Cognition, and Interests

This double standard is no accident; it is rooted in the deep soil of human psychology, social structure, and conflicting interests.

1. The Social Extension of the "Uncanny Valley" Effect

The public feels instinctive discomfort and alertness toward entities close to but not quite human (e.g., robots).

When such entities are in control of tons of steel moving at speed, that discomfort multiplies into fear.

Every accident, no matter how minor, triggers confirmation bias: "I knew this thing was unsafe."

2. The Cognitive Dilemma of Responsibility Attribution

Human driver accidents have clear liability—personal negligence, drunk driving, speeding, etc.

But the chain of responsibility in self-driving accidents is far more complex: algorithm flaws? Sensor failures? Map data errors? Or overly extreme scenarios?

This ambiguity unsettles the public, leading them to demand "absolute safety" guarantees—a standard humans have never achieved.

3. Conflicts of Interest Brought into the Open

The California Truck Drivers Union cited the Waymo accident to call for suspending its operations, stating, "Robot taxis threaten workers' jobs."

This reveals the deep contradiction in autonomous driving adoption: the clash between technological progress and employment security. Every accident becomes ammunition for opponents to strengthen their stance.

4. The Amplifying Effect of Media

Headlines like "Robot Injures Child" spread far more than "Human Driver Causes 1,000th Accident."

Media overreporting of rare events distorts public risk perception.

Carnegie Mellon professor Philip Koopman noted: "Every accident is treated as an isolated incident rather than judged against safety data accumulated over millions of miles."

III. The Cost of "Double Standards": Innovators' Prisoner's Dilemma and the Industry's Trust Deficit

This unfair scrutiny is exacting a heavy toll on the entire industry.

1. Innovators' "Perfect Prisoner" Dilemma

Companies must walk a tightrope between "aggressive expansion" and "absolute safety."

Waymo plans to expand its services to over a dozen new cities in 2026, including Dallas, Houston, Detroit, and Nashville, and has raised $16 billion in new funding.

Tesla also plans to launch self-driving taxi services in multiple U.S. cities in 2026.

But every accident could slam the brakes on these ambitious plans, just as the 2018 Uber fatality shut down its self-driving program and Cruise's 2023 accident halted its operations.

2. Regulatory Lag and Contradictions

On February 11, 2026, the U.S. House Energy and Commerce Committee narrowly passed the Autonomous Vehicles Act by a vote of 12–11, advancing it to full House consideration.

(For further reading, click: U.S. Autonomous Driving Bill "Breaks Through": Is It Tesla's "Timely Rain" or an Industry "Double-Edged Sword"? Strategic Anxiety Behind the U.S.-China Race?)

But regulatory frameworks lag far behind technological iteration.

Waymo faces another investigation: its taxis repeatedly passed stopped school buses in Austin school zones despite district requests to halt operations during class hours.

3. The Fragility of Public Trust

According to an October 2025 Marist poll, 56% of Americans said they were unlikely or very unlikely to try self-driving taxis.

Trust takes years to build but can collapse in a single accident.

Waymo co-CEO Tekedra Mawakana admitted: "I think society can accept [robot-caused deaths]," but only if companies maintain extremely high transparency about their safety records, which is a current industry weakness.

4. The Sustainability Challenge of Business Models

If society demands self-driving vehicles be orders of magnitude safer than human driving before acceptance, technological commercialization faces insurmountable cost barriers.

Melrose warned: "People need to adjust their expectations, because this technology is still learning to handle irrational and unpredictable human behavior on the roads."

IV. The Path to Breakthrough: From "Perfection Myth" to "Credible Evolution"

Breaking the "double standard" dilemma requires cognitive upgrades and collaborative evolution among technology, regulators, and the public.

1. The Industry's "Humility and Transparency" Revolution

Companies must stop reflexive defenses and instead show technology's true limitations.

One professor suggested: "If a company says, 'This is where the real difficulty lies. This is what we don't know. This is what we're working on,' I'd trust them more."

This means releasing more non-sensitive data, sharing edge-case handling logic, and establishing independent safety advisory committees.

2. The Regulatory "Agile and Wise" Transformation

Regulators need dynamic evaluation systems that neither overly restrict innovation nor ignore risks.

The UN's 2026 Draft Global Regulations for Autonomous Driving Systems provides a unified global framework, but countries must develop localized implementation rules to balance safety and development.

3. The "Risk Perception" Reshaping of Public Education

The concept of "relative safety" rather than "absolute safety" must be popularized.

As the Waymo accident shows, the system already works to minimize harm even when it cannot avoid a collision entirely; this mitigation is the true value of technological progress.

Media should report more balancedly: cover accidents but also present data; report failures but also record successes.

4. The "Gradual Acceptance" of Social Contracts

The public may need to accept that for a long time, self-driving systems will make errors—just of different types and frequencies than humans.

The key is establishing a rapid learning, continuous improvement feedback loop so every accident nourishes system evolution.

The "double standard" predicament facing self-driving taxis is essentially the growing pains of human society welcoming disruptive technology.

We demand that machine drivers reach, within a few years, safety levels that human drivers have only approached after more than a century of driving and trillions of miles; the expectation is contradictory in itself.

Yet this "harshness" also contains positive force: it compels the industry to hold itself to the highest standards, accelerating safety technology maturation. Pioneer companies like Waymo, Tesla, and Zoox are iterating more reliable systems under this "unforgiving" scrutiny.

In conclusion, "Self-Driving Cars Are Here" (WeChat Official Account: Autonomous vehicles have arrived) believes:

The ultimate challenge for autonomous driving has never been perception accuracy or decision speed, but human hearts.

If the industry continues to self-justify with "We're 10 times safer than humans," it will only deepen the divide;

only by setting aside technological arrogance, listening humbly to fears, addressing doubts, and sharing risks can the public become truly willing to sit in a driverless car.

After all, the road to the autonomous driving era is not lit by LiDAR but paved by trust.

What do you think? #SelfDrivingCarsAreHere #AutonomousDriving #SelfDriving #DriverlessCars

Disclaimer: The copyright of this article belongs to the original author. It is reprinted solely to share more information. If the author's information is marked incorrectly, please contact us immediately so we can amend or delete it. Thank you.