Trillion-Dollar Robot Market: LiDAR Transforms from Optional to Essential Equipment

08/11 2025

Author | Mao Xinru

This year's World Robot Conference (WRC) was a spectacle of humanoid robots.

Compared with last year's exhibition, which featured only 27 humanoid robot makers, this year 50 full-machine enterprises participated, nearly double the number.

Not only were there more "people" on site, but there were also more technological showcases, ranging from boxing and synchronized dancing to running a convenience store—truly a diverse display.

Particularly eye-catching was Tiangong 2.0Pro, the sibling of the robot marathon champion, Tiangong Ultra. It navigated independently to the work area, planned new paths in real-time amidst dense crowds, and stably avoided obstacles.

Unlike last year's exhibits, which mostly "stayed put" on their booths, this year's robots exhibited a much stronger sense of life, with significant improvements in autonomy and environmental adaptability.

This advancement is not just due to stronger motor abilities but, more crucially, to the evolution of their visual perception capabilities, enabling them to see the world more clearly, understand their environments, and make more intelligent decisions.

However, allowing robots to truly "see" the complex and ever-changing world is no small feat.

Vision becomes a bottleneck, and robots need "new eyes".

Biologically speaking, humans acquire 70-80% of external information through vision. Similarly, robots heavily rely on visual perception to obtain vast amounts of information.

Visual perception grants robots three key capabilities: environmental understanding, decision support and action planning, and human-robot interaction.

Through vision, robots obtain rich environmental information and achieve spatial semantic understanding, ultimately closing the loop from "seeing" to "understanding" to "acting".

Especially in unstructured human-robot interaction scenarios, vision enables robots to comprehend human intentions, significantly enhancing the naturalness and emotional resonance of the interaction.

To better meet these needs, robot vision technology has evolved from two-dimensional to three-dimensional, from monocular to multi-camera, and from passive to active perception.

In the industry, companies like Unitree Robotics, Zhiyuan Robotics, Xinghai Map, Xingdong Jiyuan, and Zhongqing Robotics have adopted multi-modal visual solutions.

Despite the current diversification of visual solutions, robots still face common issues when trying to "see clearly".

First, extreme environments such as strong light, shadows, rain, and fog can severely impact camera imaging, causing visual algorithms to fail.

Second, dynamic occlusions like fast-moving pedestrians and vehicles greatly test the real-time re-identification and re-localization capabilities of the robot's vision system.

Moreover, the issues of computing power, power consumption, and data need urgent resolution. For instance, high-precision 3D reconstruction and deep learning inference require powerful computing power from edge computing platforms, but this is constrained by battery life and cooling conditions. Training highly robust models necessitates large-scale, multi-scenario annotated data, and the cross-scenario migration ability of models also needs improvement.

While optical cameras excel in resolution and semantic understanding, they rely on passive imaging and are susceptible to environmental factors like lighting, rain, snow, and reflections. To compensate for these weaknesses, robots need a "second eye" - LiDAR.

LiDAR calculates distances by actively emitting laser pulses and measuring echo times, thereby directly generating precise three-dimensional point cloud data. It boasts three core advantages:

  • All-weather stability: Unaffected by lighting conditions, ensuring stable output of high-precision ranging.
  • Geometric accuracy: Generates millions of point clouds with a single scan, finely reconstructing contours and surfaces.
  • Long-distance detection: Some solid-state and mechanical LiDARs can achieve long-range early perception and prediction.
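
The ranging principle behind these advantages is time-of-flight: distance is half the round-trip echo time multiplied by the speed of light. A minimal sketch of the arithmetic (illustrative only, not any vendor's firmware):

```python
# Sketch of LiDAR time-of-flight ranging: the pulse travels to the target
# and back, so the one-way distance is (speed of light * echo time) / 2.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(echo_time_s: float) -> float:
    """Distance to target from a single pulse's round-trip echo time."""
    return C * echo_time_s / 2.0

# A pulse returning after 200 nanoseconds corresponds to roughly 30 m.
print(round(tof_distance_m(200e-9), 2))
```

The nanosecond scale of the echo times is why LiDAR timing electronics, rather than optics, often set the ranging precision.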

Relying on these advantages, LiDAR has become the optimal sensor for three-dimensional environmental perception and one of the most important "eyes" for robots.

It is now a standard sensor for high-reliability robot scenarios such as autonomous driving, logistics robots, lawn mowing robots, and AGV robots.

LiDAR makes robot movement "stable, accurate, and agile".

Robot movement demands positioning, navigation, and obstacle avoidance.

The traditional "GPS+camera" solution is not only restricted in where it can operate but also prone to failure in complex terrain, indoor and building-occluded environments, and at night.

LiDAR, especially 3D LiDAR, surpasses traditional solutions in terms of perception accuracy, comprehensiveness, and navigation/positioning stability.

From a vertical perspective, 3D LiDAR outperforms 2D LiDAR in omnidirectional perception. The scanning range of 2D LiDAR is limited to a single plane and cannot detect obstacles outside the plane, such as ground holes or high-altitude objects. 3D LiDAR constructs a three-dimensional point cloud through multi-line scanning, effectively solving this limitation.
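
The multi-line scanning described above can be made concrete: each laser channel sits at a fixed elevation angle, and every return is converted from (range, azimuth, elevation) into a 3D point. A minimal sketch using the standard spherical-to-Cartesian conversion (the axis convention is an assumption, not a specific sensor's):

```python
import math

def polar_to_xyz(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one LiDAR return (range, azimuth, elevation) to Cartesian XYZ.

    Each laser "line" of a multi-line scanner has a fixed elevation angle;
    a 2D LiDAR is the special case elevation == 0 (a single scan plane).
    Convention assumed here: x forward, y left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return at negative elevation lands below the sensor plane (z < 0),
# e.g. a ground hole that a planar 2D scan at elevation 0 can never see.
x, y, z = polar_to_xyz(5.0, 45.0, -15.0)
```

Sweeping many channels at different elevations while the unit rotates in azimuth is exactly what turns a single scan plane into the full 3D point cloud.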

In summary, 3D LiDAR is the optimal sensor for robot movement: an all-rounder (a "hexagonal warrior", in industry slang) that integrates positioning and navigation, all-weather perception, complex terrain detection, complex spatial environment monitoring, and irregular obstacle recognition.

Taking the first consumer-level application - lawn mowing robots - as an example, 3D LiDAR significantly enhances the user experience.

The traditional "RTK+vision" solution often misjudges boundaries in strong light, misses cuts after rain, or collides with hidden obstacles. Hesai Technology's JT series ultra-hemispherical 3D LiDAR, designed specifically for small robots, addresses these issues with nearly 360° blind-spot-free perception and a minimum detection distance of 0 m.

The MOVA 600/1000 lawn mowing robots equipped with this LiDAR can accurately identify complex environments, achieve intelligent path planning, and operate around the clock. MOVA 600/1000 shipped over 100,000 units in half a year and topped the Amazon charts in Germany and France, indirectly verifying the maturity and user recognition of the solution.

This further propels smart lawn mowing robots into a virtuous cycle of "3D LiDAR enhancing product capabilities - driving market scale expansion - reducing 3D LiDAR costs - further increasing adoption rates".

Besides smart lawn mowing robots, LiDAR is also being applied in various mobile robots across multiple scenarios:

  • Quadruped robots: Robot dogs from Unitree Robotics, Magic Atom, and Vital Dynamics, equipped with LiDAR, can complete tasks in a variety of complex scenarios.
  • Humanoid robots: Products from companies like Zhiyuan Robotics, Zhongqing Robotics, and Xingdong Jiyuan use LiDAR+vision collaborative perception to enable more natural human-robot interaction and commercial deployment in unstructured spaces such as exhibition halls and shopping malls.
  • Service robots: Unmanned delivery vehicles from companies like Neolix, JD.com, and Jiushi, equipped with LiDAR, ensure safe delivery, precise positioning, and obstacle avoidance.
  • Mobile robot training: For instance, Hesai Technology and Qunhe Technology jointly launched a simulation training solution that uses LiDAR point cloud data to optimize robot navigation algorithms.
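
As a toy illustration of how point-cloud data can feed a navigation decision, the sketch below flags returns that fall inside the robot's forward corridor. The function name, frame convention, and thresholds are all hypothetical; the actual Hesai/Qunhe simulation stack is not described in the source:

```python
def blocks_path(points, corridor_half_width=0.4, lookahead=2.0, max_height=1.8):
    """Return True if any LiDAR return falls inside the robot's forward
    corridor. Hypothetical helper for illustration, not a real vendor API.

    points: iterable of (x, y, z) in the robot frame, x forward, z up."""
    for x, y, z in points:
        in_front = 0.0 < x <= lookahead          # ahead of the robot, within range
        in_lane = abs(y) <= corridor_half_width  # laterally inside the corridor
        in_height = 0.0 <= z <= max_height       # between ground and robot height
        if in_front and in_lane and in_height:
            return True
    return False

cloud = [(1.2, 0.1, 0.5),   # a box directly ahead: blocks the path
         (1.5, 2.0, 0.3)]   # clutter well off to the side: ignored
```

In a real stack this kind of geometric test is just the first stage before clustering, tracking, and path replanning, but it shows why raw 3D point clouds are directly actionable for navigation.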

These diverse ecological layouts all point to the same trend: LiDAR is becoming a universal sensor base for robot "mobility".

Robot LiDAR: A Blue Ocean Larger than ADAS

In public perception, LiDAR has drawn widespread attention in recent years for its crucial role in advanced driver assistance systems (ADAS) and autonomous vehicles.

Shifting the focus to the robot sector, it becomes evident that this market far surpasses the automotive field in terms of scale, application diversity, and cost-effectiveness demands.

The entire robot market is undergoing accelerated mass production and scaling, and accordingly, 3D LiDAR is expected to have a significant incremental market.

According to public data estimates, by 2029, the number of robots equipped with LiDAR worldwide will reach 5 million, and the robot LiDAR market is projected to reach a scale of tens of billions.

In terms of scale and growth potential, LiDAR penetration in robots is outpacing that in ADAS. Compared with the long automotive-grade certification cycles, complex decision-making processes, and stringent cost controls of automotive ADAS, the robot market, especially its consumer and commercial segments, is more open to new technologies and iterates products faster.

Second, robots come in diverse forms with broader application boundaries, imposing varied requirements on the performance and physical form of LiDAR, and thereby offering far richer room for innovation and customization than automotive applications.

[Pictured: Vital Dynamics Vbot; Xingdong Jiyuan Q5]

From a market segment perspective, smart lawn mowing robots rely on LiDAR for precise boundary recognition and blind-spot-free coverage; unmanned delivery vehicles need its high-precision positioning and dynamic obstacle avoidance capabilities to ensure the safety of the "last mile".

Quadruped/humanoid robots rely on its three-dimensional point clouds for spatial modeling in complex environments; household service robots use it to enhance the reliability of close human-robot interaction; while industrial inspections and digital twins require it to provide high-precision point cloud data to support modeling.

In these market opportunities, LiDAR companies can not only tailor products for robots but also "adapt to local conditions" by directly reusing automotive-grade LiDAR.

For example, Hesai Technology applies its automotive-grade solid-state LiDAR FTX to various intelligent robot platforms, empowering multiple intelligent scenarios with its wide field of view and compact size.

Simultaneously, LiDAR players will continue to drive down costs through chip-based, platform-based design, automated manufacturing, and economies of scale, making the prospects for cost reduction and scaling clearer.

The current industry consensus is that as the robot industry gradually matures, 3D LiDAR will accelerate its penetration into the robot market.

Furthermore, the deep integration of robot LiDAR with AI algorithms, simulation platforms, and digital twin systems will significantly enhance industrial efficiency and application innovation.

For each vendor, as a "perception infrastructure provider", the opportunity extends beyond selling individual devices: they can turn perception into a platform capability, embedded downward into hardware and extended upward to algorithms and system integration.

By then, robot LiDAR will not merely be a "hardware replacement item" but the interface of the AI+smart hardware ecosystem.

The evolution of robot visual perception essentially endows machines with the ability to understand the physical world.

With its active, precise, and all-weather three-dimensional perception characteristics, LiDAR has become a crucial component in compensating for the shortcomings of traditional vision and enabling reliable autonomy for robots.

As LiDAR transitions from "high-end optional equipment" to "core standard equipment", making three-dimensional perception a new instinct for robots, this blue ocean, much larger than ADAS, will usher in a new era of perception for intelligent robots.

And before the blue ocean turns red, some have already anchored their leadership positions.

Disclaimer: The copyright of this article belongs to the original author. It is reprinted solely to disseminate more information. If author information is marked incorrectly, please contact us promptly to amend or delete it. Thank you.