03/30 2026
Autonomous driving is the central battleground for automotive intelligence, and LiDAR is the most critical differentiator in this intelligent competition.
The existence or abandonment of LiDAR is one of the biggest disagreements in current smart car technology routes. The two major camps are diametrically opposed, leaving many consumers torn when choosing a car.
One camp, represented by Tesla and XPeng, is the pure vision faction, which firmly believes LiDAR is unnecessary. They argue that LiDAR contributes little in real-world driving yet adds significant cost, and that high-resolution cameras plus AI are sufficient to serve as the car's eyes.

The other camp, represented by Waymo, Huawei, NIO, and Li Auto, adheres to a multi-sensor fusion approach in which LiDAR and cameras work in tandem. Their reasoning: in pure vision solutions, the camera's recognition success rate drops sharply under challenging conditions such as backlighting, darkness, heavy rain, and dense fog. LiDAR, they argue, is therefore an essential hardware foundation for high-level autonomous driving at L3 and above.

Manufacturers each have their own arguments, leaving consumers even more confused when buying a car: Should they choose a car equipped with LiDAR? How many units? Is the extra tens of thousands of yuan worth it?
Is LiDAR truly a safety redundancy or just a 'fool's tax'? This question plagues more and more ordinary car owners and has become an unavoidable topic in the entire automotive industry.
Let's take a look back at the technological development history of LiDAR to gain a clearer understanding of this technology. Perhaps this will help us better understand what we truly need.
The History of LiDAR Development: The Evolution of Automotive Vision
Bats have famously poor eyesight, yet they navigate freely and avoid obstacles in the dark by emitting ultrasonic pulses and judging distance and direction from the echoes. Cars likewise have no eyes, and they cannot emit sound waves either, so how do they gauge the distance and identity of objects around them? The laser light of LiDAR plays the same role as a bat's ultrasound.
In 1960, the world's first practical ruby laser was successfully developed. By emitting laser pulses and timing how long each pulse takes to bounce back from an obstacle, a device can calculate precise distances and gradually reconstruct a three-dimensional outline of its surroundings.
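The ranging principle above reduces to simple arithmetic: a pulse's round-trip time, multiplied by the speed of light and halved, gives the distance. A minimal sketch, with an illustrative function name and timing values rather than any real device's API:

```python
# Minimal sketch of LiDAR time-of-flight ranging; names and numbers are
# illustrative, not a real device driver.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to an obstacle from the measured round-trip time of one pulse."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# A return after roughly 667 nanoseconds means an obstacle about 100 m away.
print(round(range_from_echo(667e-9), 2))
```

Repeating this measurement millions of times per second across many beam angles is what turns single distances into the 3D outline the article describes.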

This powerful ranging and perception capability was initially confined to research laboratories and major national projects. NASA, for example, used a laser altimeter on the Apollo 15 lunar mission to map the topography of the Moon's surface, producing some of the first laser-derived three-dimensional maps of the lunar landscape.
With the miniaturization of microelectronics, LiDAR finally became the eyes of civilian devices. Its evolutionary journey since has been one of steadily refining these electronic eyes.
LiDAR first found its way into industry. Starting in the 1980s, industrial firms in Germany, Japan, and other manufacturing powers began using single-line 2D LiDAR for factory automation. This first generation of civilian LiDAR scanned only a single line, collecting distance data within one plane, yet it was already useful for the autonomous navigation and obstacle avoidance of transport vehicles and for monitoring warehouse and logistics corridors.
At the beginning of the 21st century, an event completely changed the fate of LiDAR—the DARPA Grand Challenge for autonomous vehicles.

To perceive complex 3D environments accurately and in real time, participating teams urgently needed a powerful sensor. In 2005, the winning Stanford team relied heavily on LiDAR and rose to fame. By the 2007 Urban Challenge, Velodyne's multi-beam mechanical rotating LiDAR had matured, and five of the six finishing teams used it, marking the beginning of LiDAR's golden age in the automotive field.
Why is LiDAR such a boon for autonomous driving? Early autonomous vehicles relied on cameras, ordinary sensors, and millimeter-wave radar, and the resulting gaps in perception made proactive, high-quality decisions difficult. LiDAR makes up for several shortcomings:
1. Seeing farther gives the car enough reaction time. LiDAR can detect objects beyond 200 meters, and later models exceed 500 meters. At highway speeds, the car can spot a truck's damaged taillight or debris on the road well in advance, warning and braking early. LiDAR's long-range perception leaves more time for braking and reaction.
2. Seeing clearly catches small obstacles that drivers and cameras miss. LiDAR's ranging accuracy reaches the centimeter level, and the 3D point cloud it generates captures not only pedestrians and vehicles but also low-lying hazards such as dropped nails and potholes. "Ghost probe" dart-outs that the human eye misses and surface irregularities that cameras cannot resolve are picked up by LiDAR, which is unaffected by lighting conditions. Even on pitch-black nights and in heavy fog or rain, it compensates for what eyes and cameras cannot see clearly.

3. Reacting quickly lets the car decide faster. Cameras output 2D images, so the autonomous driving system must rely on algorithms to infer object distances, introducing judgment errors and delays in image parsing and processing. LiDAR directly outputs distances and 3D data, so the car can quickly judge the movements of pedestrians and vehicles and decide sooner to slow down or swerve.
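The three advantages above can be made concrete. Because each LiDAR return is already a 3D point, nearest-obstacle distance and a crude braking check are direct arithmetic, whereas a camera pipeline must first infer depth. A hedged sketch, with hypothetical function names and thresholds that stand in for a real system's tuned values:

```python
import math

# Hedged sketch: with LiDAR, each return is already a 3D point, so the nearest
# obstacle distance is direct arithmetic. All names and thresholds below are
# illustrative, not any vendor's actual API.

def nearest_obstacle_m(points):
    """points: iterable of (x, y, z) in metres, with the sensor at the origin."""
    return min(math.dist((0.0, 0.0, 0.0), p) for p in points)

def should_brake(points, speed_m_s, reaction_s=1.0, decel_m_s2=6.0):
    """Crude check: brake if the stopping distance reaches the nearest return."""
    stopping = speed_m_s * reaction_s + speed_m_s**2 / (2 * decel_m_s2)
    return stopping >= nearest_obstacle_m(points)

cloud = [(35.0, -1.2, 0.3), (80.0, 4.0, 1.0)]   # two returns, metres
print(should_brake(cloud, speed_m_s=30.0))       # 30 m/s is roughly 108 km/h
```

The point of the sketch is the absence of a depth-estimation step: no neural network has to guess how far away the 35-metre return is before the braking arithmetic can run.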
It was precisely these overwhelming technical advantages that made LiDAR's name in autonomous driving competitions. At this stage, however, LiDAR remained exclusive to the test vehicles of Silicon Valley players like Waymo, with unit prices reaching tens of thousands of dollars. Its bulk did not help: a mechanical rotating LiDAR perched on the roof was a conspicuous bulge that detracted from the vehicle's appearance, making it hard to popularize among ordinary car owners.
Mass production and vehicle integration of LiDAR began after 2016. Riding the wave of automotive intelligence, these "eyes" of the car iterated quickly, shrinking further so they could be embedded more cleanly into the vehicle body, while laser wavelengths and detection ranges advanced toward full-coverage perception. Coupled with falling costs, both the number of automakers adopting LiDAR and the variety of integration solutions have grown, including Waymo's multi-sensor redundancy solution, Huawei's ADS 4.0 full-coverage perception architecture, and XPeng's earlier LiDAR + vision dual-insurance solution.
However, as LiDAR's presence in the automotive industry grows, an arms race in hardware has also brought unprecedented confusion to consumers.
Is More Always Better? New Problems from the Hardware Arms Race
From the initial 1 unit to 2, 3, or even 4... some industry forecasts predict that by 2032 a vehicle could carry as many as 6 LiDAR units, including 2 long-range and 4 short-range. In the intelligent transformation of cars, the most visible competition is the number of LiDAR units installed.
As the number of installations increases, consumers' confusion grows, along with intensifying controversies. Summarizing consumers' confusion about LiDAR, the main concerns are:
Controversy 1: Are "eyes" important for daily driving?
Many people only commute in urban areas under simple road conditions, without highway or nighttime driving. Spending tens of thousands of yuan extra on a LiDAR version feels like paying for features they will never use. Yet skipping it leaves them uneasy, fearing their car's safety may not hold up in an unexpected situation.
Controversy 2: Does having "eyes" mean the car has good vision?
Even among cars equipped with LiDAR, perception capability varies greatly between models. A car with two units may not perform as well as another with just one, because individual LiDAR units differ in quality. A high-line-count, high-frequency, high-angular-resolution LiDAR is like a pair of eyes with 5.0 vision (perfect eyesight on the Chinese acuity scale), while multiple low-performance entry-level units are like several pairs of weak eyes that still cannot see the road clearly.
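The angular-resolution point can be quantified: the gap between neighbouring laser returns grows roughly linearly with distance, so a coarse sensor can straddle a small object far away without ever hitting it. A rough sketch with illustrative numbers:

```python
import math

# Rough sketch of why angular resolution matters: the lateral gap between
# adjacent returns grows with distance. The resolutions compared below are
# illustrative, not any specific product's spec.

def point_spacing_m(distance_m, angular_res_deg):
    """Approximate lateral gap between adjacent returns at a given distance."""
    return 2 * distance_m * math.tan(math.radians(angular_res_deg) / 2)

for res_deg in (0.1, 0.4):  # a finer sensor vs a coarser one
    print(res_deg, round(point_spacing_m(100.0, res_deg), 3))
```

At 100 metres, a 0.1-degree sensor places returns about 17 cm apart, while a 0.4-degree sensor leaves gaps of roughly 70 cm, wide enough to miss a tyre or a fallen box entirely. This is the arithmetic behind "two weak eyes are not better than one sharp one."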

Controversy 3: Do more "eyes" mean safer driving?
Even with multiple high-performance LiDAR units, accidents may still happen, just as a person with good hearing and eyesight may still trip and fall. For example, one brand's all-weather-radar model was involved in an accident on a winter afternoon that the official report attributed to sensor recognition limitations under extreme lighting. Even more disheartening for consumers, the daily utilization rate of LiDAR in many automakers' vehicles is reportedly below 5%, meaning this expensive sensor sits idle most of the time and extra units are largely for show.
Controversy 4: With vision but no brain, is the car still safe?
LiDAR is only the car's eyes. If the autonomous driving system's "brain" is not up to par, if the algorithms cannot judge accurately and decide proactively, or if supporting infrastructure such as onboard communication and computing chips falls short, then even 5.0 vision cannot guarantee safety. Tesla, for example, sticks to a pure vision route because it believes its FSD algorithm is strong enough to support intelligent driving with reduced hardware. Without the backing of fusion algorithms and system-level integration, piling on high-performance LiDAR units accomplishes little.
In theory, installing LiDAR brings a better intelligent driving experience, but the hardware count in promotional materials does not necessarily translate into real-world performance. High-end configurations that underperform and extra hardware that yields no corresponding improvement are real problems, pushing the LiDAR competition to a deeper level: from whether to have it, and how many units, to more fundamental questions.
Safety Vision: The Invisible Arena of Automotive Intelligence Competition
Returning to the essence of technology, the sole and ultimate indicator of a multi-LiDAR solution is the car's safety vision. Every LiDAR unit should contribute to safety and not be mere decorations.
So, what truly determines the upper limit of a smart car's safety vision? The answer is collaborative capabilities.
Just as a person avoids risks on the road by not only seeing dangers in time but also immediately judging how to avoid them and then controlling their body to dodge, truly ensuring safety requires a coordinated effort from the entire body. For a car, sensors such as LiDAR, cameras, and millimeter-wave radars must work together while transmitting signals to the "brain."
Take the most frightening "ghost probe" scenario, where a pedestrian suddenly darts out from behind a large vehicle on the roadside. The camera may be blocked by the large vehicle and fail to see the pedestrian, and if the LiDAR detects the pedestrian but fails to transmit the signal to the system in time, the reaction will be delayed. Therefore, the collaboration between LiDAR and multi-sensors is directly related to safety.
However, multi-sensor collaboration is hard to achieve. It requires precise spatio-temporal synchronization, data association, and deep data fusion, all of which test algorithm performance and computing-power configuration. In models with poor fusion, the LiDAR may sit idle while the system relies solely on other sensors, rendering the expensive hardware useless in exactly the extreme scenarios it was bought for.
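One small piece of the spatio-temporal synchronization described above can be sketched: pairing each LiDAR sweep with the camera frame nearest in time, and rejecting pairs whose clocks disagree too much to fuse safely. The rates, field layout, and skew threshold below are illustrative assumptions, not any automaker's actual pipeline:

```python
import bisect

# Illustrative sketch of temporal data association: match each LiDAR sweep to
# the camera frame closest in time, dropping pairs outside an allowed skew.
# All rates and thresholds are hypothetical.

def associate(lidar_ts, camera_ts, max_skew_s=0.02):
    """Both inputs are sorted timestamps in seconds.
    Returns (lidar_t, camera_t) pairs within the allowed clock skew."""
    pairs = []
    for t in lidar_ts:
        i = bisect.bisect_left(camera_ts, t)
        candidates = camera_ts[max(0, i - 1):i + 1]  # neighbours around t
        best = min(candidates, key=lambda c: abs(c - t))
        if abs(best - t) <= max_skew_s:
            pairs.append((t, best))
    return pairs

lidar = [0.00, 0.10, 0.20]                                   # 10 Hz sweeps
camera = [0.005, 0.038, 0.071, 0.104, 0.137, 0.171, 0.204]   # ~30 fps frames
print(associate(lidar, camera))
```

Real systems go much further, using hardware trigger lines and per-point motion compensation, but even this toy version shows why fusion is a systems problem: a sweep with no frame inside the skew window simply cannot be fused, however good the sensor is.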

Currently, leading practices in the industry, such as Huawei's ADS 4.0, fuse LiDAR point clouds with camera images for recognition through a BEV + Occupancy scheme, achieving higher accuracy than pure vision. XPeng's and NIO's proprietary onboard computing platforms let all sensors report simultaneously, reducing data conflicts.
Clearly, achieving collaborative perception with LiDAR cannot be accomplished simply through procurement and bulk installation on vehicles. It requires long-term R&D and accumulation by automakers in computing hardware, clusters, AI training platforms, underlying architectures, algorithms, and ecosystems. Only on a powerful digital foundation can the hardware value of LiDAR be fully unlocked, enabling full-coverage perception and precise decision-making, which are the core of high-level intelligent driving. High-level intelligent driving, in turn, further enhances the role of LiDAR.
From the perspective of LiDAR alone, we may have already glimpsed that the competition in automotive intelligence has entered the deep water zone, exhibiting a Matthew effect where the strong get stronger.
When Hardware Stacking Is Useless: Where Is the Future of LiDAR?
If LiDAR does not equate to intelligent upgrades, why do major automakers continue to promote it enthusiastically? This reflects an awkward situation in the current automotive market.
Intelligence-related parameters have become important references for car buyers. Since LiDAR is seen as essential hardware for a car's perception system, automakers that skip it risk being perceived as less intelligent and losing ground in marketing and spec-sheet comparisons. So even when they know they lack fusion capability, cannot coordinate their sensors, and that the installed LiDAR may sit idle, they dare not opt out of the hardware stacking race.
On the consumer side, ordinary buyers have limited technical understanding of intelligent driving. Software algorithms are invisible and intangible, and unless buyers personally experience a failure, they are unlikely to appreciate their importance. LiDAR, by contrast, is tangible: its output is real-time and intuitive, and its unit count is easy to compare. More LiDAR units create the impression of technological leadership and a fully loaded configuration.

So, returning to the question that consumers care about most: Is it worth spending money on a LiDAR version?
From a technological perspective, LiDAR is definitely not a "fool's tax" but a vision guarantee for autonomous driving systems. However, to truly get your money's worth from LiDAR, you cannot just buy LiDAR alone. You also need full-coverage perception "eyes," a collaboratively working sensor system, a responsive "brain," sufficient computing power, and a reliable network... Only when all these elements come together can LiDAR truly be worth the investment.
Looking back at LiDAR's development, from an unattainable national-project treasure, to the roofs of experimental vehicles, to a core safety component within reach of ordinary consumers, we can see how automotive vision has evolved.
Ultimately, what consumers truly want is to achieve the greatest intelligence and safety at the most reasonable cost. What determines whether a car is smart, safe, and intelligent enough are the hidden, intangible capabilities beneath the hardware—the true focus of the automotive intelligence competition.