Who Will Be the Primary 'Eyes' of Robots: LiDAR or Cameras

08/14/2025

Produced by Zhineng Zhixin

In the robotics industry, LiDAR is transitioning from a supporting role to a leading one. Historically, it was viewed as an optional auxiliary sensor to enhance positioning accuracy or conduct environmental scanning. Today, it has emerged as the standard sensing solution for the majority of mobile robots, often referred to as the 'eyes of robots'.

This trend stands in stark contrast to the ongoing debate in the automotive industry, where the sensing route for autonomous vehicles remains divided between 'pure vision' and 'multi-sensor fusion'. The robotics field, by comparison, has almost universally adopted LiDAR. Whether for lawn mowers, robotic dogs, or industrial AGVs, manufacturers are increasingly integrating LiDAR as the core sensor in their next-generation products.

The reasons for this shift are practical: LiDAR provides high-precision, full 3D spatial information across varied lighting conditions, weather, and complex terrain, without relying on extensive training data. It remains effective on cloudy days and in strong or low light. For robots that must operate continuously over long periods and meet stringent safety requirements, this reliability is crucial.

LiDAR is quietly reshaping the design logic of robots and laying the groundwork for the next wave of intelligence.

Part 1

Technological Evolution of Robot Sensing and the Role of LiDAR

To achieve autonomous mobility, robots must address localization, navigation, and obstacle avoidance simultaneously.

Early consumer-grade robots were limited by sensor capabilities, relying mainly on low-precision route planning methods.

◎ The first generation of lawn mowing robots defined their working areas by burying electromagnetic wires in the yard and used collision detection to alter paths. This method offered little environmental understanding and could only perform repetitive actions in a single scenario.

◎ The second-generation system incorporated RTK and cameras, where RTK positioning relied on satellite (GNSS) signals, combined with visual recognition to achieve a degree of autonomous path planning.

However, this solution was prone to signal loss in obstructed environments such as wooded yards and urban canyons, requiring additional RTK base stations to improve accuracy, which increased deployment cost and complexity.

◎ The third-generation solution introduced 360° scanning LiDAR, enabling robots to perform simultaneous localization and mapping (SLAM) in unknown environments.

LiDAR acquires 3D spatial data of the surroundings through high-speed scanning and measurement, recording spatial geometric relationships in the form of point clouds, thereby achieving precise obstacle perception and position estimation.
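The core of that point-cloud generation step is a coordinate conversion: each laser return is a (azimuth, elevation, range) triple that the driver turns into a Cartesian point. A minimal sketch (function name and units are illustrative, not any vendor's API):

```python
import math

def scan_to_points(measurements):
    """Convert raw LiDAR returns (azimuth, elevation, range) into
    Cartesian 3D points -- the basic step behind point-cloud output.
    Angles are in radians, range in meters; the sensor is the origin."""
    points = []
    for az, el, r in measurements:
        x = r * math.cos(el) * math.cos(az)  # forward component
        y = r * math.cos(el) * math.sin(az)  # lateral component
        z = r * math.sin(el)                 # height component
        points.append((x, y, z))
    return points

# A single return straight ahead at 5 m, level with the sensor:
pts = scan_to_points([(0.0, 0.0, 5.0)])
print(pts)  # [(5.0, 0.0, 0.0)]
```

Accumulating these points over many scan revolutions, while tracking sensor motion, is what yields the spatial geometric relationships the SLAM pipeline consumes.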

Unlike solutions reliant on visual model training, LiDAR does not require large-scale scene data for training and is insensitive to lighting changes, enabling stable operation in strong light, darkness, and shadow. This characteristic is particularly valuable in scenarios like lawn mowing, outdoor inspection, and low-light industrial plants.

In terms of hardware, the mini ultra-hemispherical 3D LiDAR significantly enhances robots' perception coverage in multi-directional movements by expanding the vertical field of view (FOV) to a hemispherical range.

This is especially crucial for robots with flexible path changes and dense surrounding dynamic targets, such as humanoid and quadruped robots, which need to continuously acquire environmental information during movements with multiple degrees of freedom like walking and turning to avoid collision risks from blind spots.

In contrast, automotive LiDAR focuses more on long-distance detection to support anticipation during high-speed travel, but its narrower vertical FOV is less suitable for the omnidirectional perception needs of robots in complex close-range environments.
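The difference between the two designs comes down to whether a nearby target's elevation angle falls inside the sensor's vertical FOV. A small sketch makes this concrete (the FOV ranges below are illustrative assumptions, not specs of any particular product):

```python
import math

def in_fov(point, v_fov_deg=(-90.0, 45.0)):
    """Return True if a Cartesian point (x, y, z), with the sensor at
    the origin, falls inside the vertical field of view.
    Default range is an illustrative ultra-hemispherical FOV."""
    x, y, z = point
    horiz = math.hypot(x, y)                   # distance in the ground plane
    elev = math.degrees(math.atan2(z, horiz))  # elevation angle of the target
    lo, hi = v_fov_deg
    return lo <= elev <= hi

# A low obstacle 2 m ahead, 1.5 m below the sensor (elevation ~ -37 deg):
print(in_fov((2.0, 0.0, -1.5)))                        # True: hemispherical unit sees it
print(in_fov((2.0, 0.0, -1.5), v_fov_deg=(-13, 13)))   # False: narrow automotive FOV misses it
```

At close range even modest height differences translate into large elevation angles, which is why a narrow vertical FOV that works well at highway distances leaves blind spots around a walking robot's feet.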

Under adverse weather conditions like rain, modern LiDAR combined with waveform-level signal processing algorithms can eliminate transient reflection signals from raindrops, snowflakes, etc., and extract real obstacle information from point cloud data. This ability ensures robots' stability in outdoor all-weather operations and improves the accuracy and consistency of SLAM map construction.
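Real systems do this rejection at the waveform level inside the sensor, but the intuition carries over to the point cloud: raindrop and snowflake returns are isolated, while real obstacles produce clustered points. A simple radius-outlier filter sketches the idea (thresholds are illustrative assumptions):

```python
import math

def remove_transient_returns(points, radius=0.3, min_neighbors=2):
    """Point-cloud-level approximation of precipitation rejection:
    keep only points that have at least `min_neighbors` other points
    within `radius` meters. Isolated returns (likely rain/snow) are
    discarded; clustered returns (real surfaces) survive."""
    kept = []
    for i, p in enumerate(points):
        neighbors = sum(
            1 for j, q in enumerate(points)
            if i != j and math.dist(p, q) <= radius
        )
        if neighbors >= min_neighbors:
            kept.append(p)
    return kept

wall = [(5.0, 0.1 * k, 1.0) for k in range(10)]  # dense cluster: a real surface
drops = [(2.0, -3.0, 2.5), (-4.0, 1.0, 0.5)]     # isolated returns: likely rain
print(len(remove_transient_returns(wall + drops)))  # 10 -- the drops are discarded
```

This brute-force neighbor search is O(n²); production filters use spatial indexing (voxel grids, k-d trees), but the clustering criterion is the same.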

Part 2

Application Differentiation and Industrialization Path

The diversification of robot types has led to significant differences in their technical demands for LiDAR.

Humanoid and quadruped robots are similar in terms of movement speed, scene switching frequency, and obstacle distribution, allowing them to adopt the same hemispherical or panoramic LiDAR solutions. The primary differences lie in installation position and quantity.

For instance, a smaller quadruped robot can cover the main field of view with a single LiDAR mounted on its head, while an industrial-grade quadruped robot used for search and rescue or exploration may require a dual-LiDAR setup for wider coverage and redundant perception.

Because the humanoid form factor occludes parts of the field of view, humanoid robots need multiple LiDAR units at positions such as the chest and back to maintain continuous omnidirectional environmental modeling.

◎ In the yard robot field, lawn mowers, pool cleaning robots, snow removal robots, and other devices can obtain continuous 3D environmental perception through panoramic LiDAR, enabling autonomous operation without manual intervention.

The market for such devices is substantial, especially in regions with mature overseas yard economies, where they offer strong price competitiveness and user acceptance. The integration of LiDAR in these devices is gradually replacing traditional solutions requiring manual deployment of markers or signal sources, thus reducing deployment costs and enhancing user experience.

◎ Industrial robots such as unmanned forklifts and automated guided vehicles (AGVs) operate in complex indoor and outdoor mixed scenarios and need to balance high positioning accuracy and environmental adaptability.

Compared to sensors like cameras and ultrasound, LiDAR can more reliably identify shelf edges, aisle widths, and dynamic obstacles, and, fused with inertial measurement unit (IMU) and odometry data, achieve centimeter-level path planning. This has become the mainstream solution in logistics, warehousing, and port handling, where requirements for operating efficiency and safety are extremely high.
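The fusion the paragraph describes is typically a Kalman-style filter; a complementary-filter blend is the simplest stand-in that shows the idea. Here the odometry pose drifts smoothly while the LiDAR scan-match pose is noisier per update but drift-free (the weighting and pose format are illustrative assumptions):

```python
def fuse_pose(odom_pose, lidar_pose, alpha=0.8):
    """Minimal complementary-filter sketch of LiDAR/odometry fusion.
    Poses are (x, y, heading); `alpha` weights the drift-free LiDAR
    correction against the smooth but drifting odometry estimate."""
    return tuple(
        alpha * l + (1.0 - alpha) * o
        for o, l in zip(odom_pose, lidar_pose)
    )

# Odometry has drifted 10 cm in x; the LiDAR scan match pulls it back:
fused = fuse_pose((1.10, 0.0, 0.0), (1.00, 0.0, 0.0))
print(fused)  # blended pose, approximately (1.02, 0.0, 0.0)
```

Real AGV stacks replace the fixed `alpha` with covariance-weighted gains (an extended Kalman filter), but the structure, a smooth prediction corrected by an absolute measurement, is the same.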

◎ The rapid expansion of the service robot market has also fueled the growth of LiDAR demand. Applications such as hotel room service, restaurant guidance, and mall inspection require robots to navigate safely and efficiently among dynamic crowds, making them more reliant on real-time perception capabilities with a large field of view compared to industrial robots.

Combining a horizontal 360° sweep with an ultra-wide vertical field of view, hemispherical LiDARs ensure that robots obtain complete spatial information at close range regardless of orientation changes, which is crucial for avoiding contact with customers or moving obstacles.

Regarding the industrialization path, platform-based product design has emerged as a key strategy to reduce costs and expand application coverage.

By standardizing hardware interfaces, data output formats, and driver protocols, a single LiDAR model can be adapted to multiple robot platforms, reducing production and R&D costs associated with customization for different terminal devices.

Simultaneously, stable production capacity expansion ensures large-scale shipping capabilities, enabling suppliers to maintain delivery stability and price competitiveness as market demand rapidly grows.

Currently, LiDAR shipments for robots have reached the hundreds of thousands of units, with hemispherical products widely used in multiple subfields, gradually forming a dominant market position.

Summary

The rapid adoption of LiDAR in the robotics field is both a testament to technological maturity and a manifestation of accelerating market competition. With falling costs and increasingly localized supply chains, it is transitioning from a 'high-end option' to a 'basic standard'.

More robots will possess enhanced perception and autonomy, meaning industry competition will shift from hardware performance comparisons to algorithm optimization, system integration, and deep application scenario adaptation. In the coming years, omnidirectional perception will become the cornerstone of robot design, with LiDAR potentially being the most stable underlying support in this race towards autonomy and intelligence.
