12/03 2025
Recently, a friend posed a question: imagine an intelligent-driving scenario on a foggy night, where the imaging capabilities of LiDAR and cameras are compromised. Can the system still rely on 4D mmWave radar data to keep intelligent driving functions running? And is combining sensors with different operating principles the best approach in such a situation? This scenario is both typical of real-world autonomous driving and highly challenging. So how does 4D mmWave radar actually fare in such conditions?

Why does 4D mmWave radar excel in foggy night environments?
In conditions that combine fog, night, poor lighting, and low visibility, cameras, which depend on light, tend to produce blurred images with poor contrast and missing details. LiDAR, particularly certain types, can also suffer from scattering, reflection, and absorption caused by fog, rain, snow, water droplets, snowflakes, and dust, degrading point cloud quality. Consequently, traditional perception solutions that rely primarily on vision (cameras) and LiDAR do not perform well. In contrast, mmWave radar, with its longer wavelength, penetrates rain, fog, dust, and water droplets far more effectively and is largely unaffected by lighting and visibility. Notably, 4D mmWave radar can still reliably detect surrounding objects in low-visibility scenarios such as fog and night, which is why it is often described as an 'all-weather, all-time' perception sensor.

Figure: Principle of an mmWave radar system. (Image source: Internet)
Compared to traditional mmWave radar, 4D mmWave radar offers substantial functional enhancements. It measures not only distance, speed, and azimuth but also elevation (i.e., height/pitch angle). Its detection capability therefore extends beyond coarse perception within a horizontal plane to true three-dimensional perception with height information, better resolution, and a degree of imaging capability.
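To make this measurement model concrete, here is a minimal Python sketch (not tied to any specific radar product) of how a single 4D detection, defined by range, azimuth, elevation, and Doppler velocity, maps to a 3D point in the sensor frame; the field names and angle conventions are assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class RadarDetection:
    """One 4D radar detection: what the sensor measures directly."""
    range_m: float        # radial distance to the target (m)
    azimuth_rad: float    # horizontal angle, 0 = straight ahead (assumed convention)
    elevation_rad: float  # vertical angle above the sensor plane (assumed convention)
    doppler_mps: float    # radial (line-of-sight) velocity (m/s)

def to_cartesian(det: RadarDetection) -> tuple[float, float, float]:
    """Convert a spherical 4D radar measurement into an x/y/z point in the
    sensor frame. The elevation angle is what turns a classic 2D radar
    'plane' detection into a 3D point."""
    xy = det.range_m * math.cos(det.elevation_rad)
    x = xy * math.cos(det.azimuth_rad)             # forward
    y = xy * math.sin(det.azimuth_rad)             # left
    z = det.range_m * math.sin(det.elevation_rad)  # up
    return x, y, z

# Example: a target 50 m ahead, 5 degrees to the left, 2 degrees up, closing at 10 m/s
det = RadarDetection(50.0, math.radians(5), math.radians(2), -10.0)
print(to_cartesian(det), "radial velocity:", det.doppler_mps, "m/s")
```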
Therefore, in low-visibility scenarios such as foggy nights, a combination of cameras and LiDAR alone struggles to deliver reliable perception. Adding 4D mmWave radar effectively compensates for their shortcomings, allowing the autonomous driving system to detect, track, continuously monitor, and precisely measure the speed and distance of various targets, and thus forms a crucial complementary capability.

Why hasn't 4D mmWave radar become mainstream?
While 4D mmWave radar can indeed enhance autonomous driving perception, why do most people still think of cameras or LiDAR first when it comes to perception hardware? And why hasn't 4D mmWave radar, a concept that has been around for years, seen widespread adoption or become the primary perception sensor?
1) Sparse point clouds & limited resolution/details
Even 4D mmWave radar falls well short of high-channel-count, high-resolution LiDAR in point cloud quantity, density, and resolution. In complex scenarios such as multi-lane merging, closely spaced small vehicles, and obstacles with complex shapes (railings, curbs, traffic cones, pedestrians, small electric vehicles...), 4D mmWave radar may return only a single 'point' or a few reflection points, making it difficult to determine the object's shape, boundaries, size, and category (is it a car, a person, a railing, or a tree?). That alone is not enough to meet planning and decision-making requirements (the toy sketch after this list illustrates the problem).
2) Weak height/shape/category judgment
Although 4D mmWave radar can obtain elevation information to improve height resolution, its recognition and classification of certain targets (pedestrians, cyclists, children, small animals, low obstacles, partially occluded objects...) are not as clear or semantically rich as with LiDAR plus cameras. This is especially true for stationary but complexly shaped obstacles (e.g., objects partially in shadow or partly obstructed at the roadside): 4D mmWave radar can report distance, speed, and angle, but cannot clearly determine category and exact boundaries.
3) Point cloud 'noise + sparsity + uncertainty' issues
Radar reflections can pick up noise from the ground, humidity, water droplets, fog droplets, building reflections, road surfaces, the metallic structures of other vehicles, rain, snow, and stray environmental returns. Moreover, 4D mmWave radar point clouds are inherently sparser than LiDAR point clouds. For complex, static, or low-reflectivity targets (black objects, fine fog droplets in the air, transparent objects...), there may be no echo at all, or the echo may be too weak. Such uncertainty makes it unreliable to judge road conditions from 4D mmWave radar alone.
4) Limited redundancy/fault tolerance of a single sensor
Autonomous driving demands extremely high safety and reliability. If 4D mmWave radar is the sole perception source, then whenever its beam is obstructed (by large obstacles ahead, complex structural reflections, terrain undulations, or adjacent metallic structures), reflections become abnormal (wet or slippery roads, water, snow, fog, rain, hail, mud, dust), or the scene contains many vehicles and dense targets, misjudgments, missed detections, and positioning errors can occur even when echoes are received. For L4/L5 autonomous driving, this level of uncertainty is too great a risk, so 4D mmWave radar alone is not sufficient.
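To make the sparsity problem from point 1) concrete, the toy sketch below clusters a handful of made-up radar reflections by distance; with only one or two points per object, shape and class are essentially unrecoverable. The point coordinates, RCS values, and the 1.5 m gap threshold are all illustrative assumptions.

```python
import math

# Toy radar "frame": a handful of (x, y, z, rcs_dbsm) reflections, which is
# often all a 4D radar returns for a nearby object (values are made up).
points = [
    (20.1, -1.2, 0.3, 8.0),   # likely the rear of a car
    (20.4, -0.9, 0.5, 6.5),
    (35.0,  3.8, 0.1, -5.0),  # weak, isolated echo: pedestrian? cone? clutter?
    (60.2,  0.4, 0.2, 12.0),  # strong single return far ahead
]

def cluster_by_distance(pts, max_gap_m=1.5):
    """Greedy single-pass clustering: a point closer than max_gap_m to any
    member of an existing cluster joins that cluster. Illustrative only."""
    clusters = []
    for p in pts:
        for c in clusters:
            if any(math.dist(p[:3], q[:3]) < max_gap_m for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

for i, c in enumerate(cluster_by_distance(points)):
    # One or two reflections say almost nothing about shape or category.
    print(f"object {i}: {len(c)} reflection(s)")
```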

Is perception redundancy the best approach?
No single perception sensor is universally effective. To truly achieve safety and reliability in various environments (sunny/rainy/foggy/night/tunnels/mixed lighting/complex traffic), the optimal solution is a multi-sensor combination + fusion. This is why many current autonomous/intelligent driving systems are equipped with cameras, LiDAR, and mmWave radar (especially 4D radar) simultaneously.
In some designs, the system even evaluates the credibility of each sensor's output and switches modes or re-weights the fusion accordingly. When vision or LiDAR is impaired, mmWave radar takes the lead; when the weather is good, lighting is adequate, and visibility is clear, cameras and LiDAR provide high-precision recognition and detail. Fusing data from all sensors yields a more robust, redundant, and trustworthy environmental model.
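As a rough illustration of this kind of credibility-based switching, the sketch below picks a leading sensor set from per-sensor confidence scores; the score scale and the 0.6/0.4 thresholds are invented for illustration, not taken from any production system.

```python
def select_perception_mode(camera_conf: float, lidar_conf: float, radar_conf: float) -> str:
    """Pick which sensors lead perception based on per-sensor confidence
    scores in [0, 1]. Thresholds are illustrative assumptions."""
    if camera_conf >= 0.6 and lidar_conf >= 0.6:
        return "camera+lidar led, radar cross-checks"
    if radar_conf >= 0.4:
        return "radar led, camera/lidar used only where still trusted"
    return "degraded: reduce speed, request driver takeover"

# Foggy night: optics are impaired, radar still healthy
print(select_perception_mode(camera_conf=0.2, lidar_conf=0.3, radar_conf=0.9))
```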
For the perception-challenging environment of foggy nights mentioned by our friend at the beginning, integrating sensors with different operational principles and performing fusion is indeed the most realistic, reliable, and optimal solution currently available.

Is fusion the ultimate answer?
Although multi-sensor fusion currently provides indispensable perception guarantees for autonomous driving, we must acknowledge that fusion is a necessary but not sufficient means of improving system reliability. Fusion is not simply stacking data from multiple sensors; it has to achieve a '1+1>2' effect, otherwise it creates problems of its own.
1) High computing power + data fusion complexity
Effectively fusing data from different sensors requires not only powerful computing capabilities but also highly mature fusion algorithms, involving numerous critical steps. These include multi-source data temporal-spatial synchronization, sensor calibration, coordinate system unification, data alignment, and deep fusion of heterogeneous information (such as point clouds, images, radar echoes), as well as credibility assessment and anomaly handling of data from various sources. For autonomous driving systems requiring real-time responses, this undoubtedly imposes very high demands on computing power and software design.
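Two of these steps, temporal alignment and unifying coordinate frames, can be sketched as follows. This is a minimal illustration with made-up radar extrinsics and a simple nearest-timestamp policy; real pipelines interpolate between frames and compensate for ego motion.

```python
import numpy as np

# Assumed extrinsics: rotation + translation taking radar-frame points into
# the vehicle (ego) frame. Real values come from offline calibration.
R_RADAR_TO_EGO = np.eye(3)
T_RADAR_TO_EGO = np.array([3.6, 0.0, 0.5])  # radar 3.6 m ahead, 0.5 m up (made-up)

def to_ego_frame(points_radar: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) array of radar points into the ego frame so they
    can be compared with lidar/camera detections expressed in the same frame."""
    return points_radar @ R_RADAR_TO_EGO.T + T_RADAR_TO_EGO

def nearest_in_time(target_ts: float, frames, max_dt: float = 0.05):
    """Pick the sensor frame whose timestamp is closest to target_ts.
    Only illustrates the need for temporal alignment."""
    ts, data = min(frames, key=lambda f: abs(f[0] - target_ts))
    return data if abs(ts - target_ts) <= max_dt else None

radar_frames = [(0.98, np.array([[20.0, -1.0, 0.4]])),
                (1.03, np.array([[19.6, -1.0, 0.4]]))]
aligned = nearest_in_time(1.00, radar_frames)
if aligned is not None:
    print(to_ego_frame(aligned))
```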
2) Post-fusion calibration/synchronization/redundancy management
To make multi-sensor fusion effective, the spatial relationships (position/attitude/calibration/alignment), temporal synchronization (different sampling frequencies/delays/latency compensation), and data fusion strategies (weighting/priority/confidence/redundancy switching) among sensors must be rigorously designed, along with long-term testing and maintenance.
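One common way to realize the weighting/confidence part of such a strategy is inverse-variance weighting of redundant measurements; the sketch below fuses two hypothetical range readings, with variances chosen purely for illustration.

```python
def fuse_inverse_variance(estimates):
    """Combine redundant measurements of the same quantity (e.g. distance to
    the lead vehicle) by weighting each sensor with 1/variance. A sensor whose
    reading is currently noisy (large variance) automatically contributes less."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * val for w, (val, _) in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Foggy night: the lidar range is noisy (variance inflated by its health monitor),
# while the radar range stays tight. Numbers are illustrative.
estimates = [(41.8, 4.0),   # lidar: 41.8 m, variance 4.0 m^2
             (42.3, 0.25)]  # radar: 42.3 m, variance 0.25 m^2
print(fuse_inverse_variance(estimates))  # fused estimate sits close to the radar reading
```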
3) Edge scenarios still persist
Multi-sensor fusion does improve perception accuracy, but edge scenarios remain: heavy fog, rain, strong crosswinds, water mist and splashes, reflections, complex terrain, metallic structures, dense mixed targets, strong reflectors, occlusions, and small objects (fallen cargo, tire fragments, plastic bags, subtle pedestrian movements). Even with fusion, blind spots, false detections, missed detections, or delays can occur, and in some cases the perception hardware itself may be interfered with, causing perception failures that fall short of what the autonomous driving system requires. This is also why L3-level and above autonomous driving remains difficult to achieve at this stage.
4) Cost/industrialization/mass production issues
Equipping multiple perception sensors will inevitably increase costs, system complexity, power consumption, and maintenance requirements. For automakers pursuing mass production and commercialization, this imposes additional demands on vehicle cost control, long-term reliability, after-sales maintenance systems, and even product lifecycle management. Therefore, many automakers now offer different perception solutions for different market segments to meet the needs of more consumers.
Thus, even though 'multi-sensor + fusion' is currently the optimal solution, it still needs continuous optimization, validation, and refinement over time, across technology, the supply chain, and engineering practice.

Final Thoughts
On a foggy night, 4D mmWave radar should be regarded as the foundation of perception: it can maintain stable measurements of distance, speed, and approximate height when the optical sensors fail, but it cannot provide the semantic and boundary precision needed to support higher-level decision-making. The right approach is therefore to treat it as a redundant yet essential component that complements LiDAR and cameras. At the software level, this must be backed by confidence-based modal weighting and degradation strategies, real-time health monitoring with automatic re-calibration, and clearly defined ODD (Operational Design Domain) boundaries and fault-degradation procedures. Only by addressing hardware complementarity, fusion algorithms, real-time confidence management, and operational safety boundaries together can autonomous driving systems achieve both 'visibility' and 'stability' in extreme scenarios like foggy nights.
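As a final illustration of the health-monitoring idea mentioned above, the sketch below flags a sensor as degraded when its data goes stale and falls back to a radar-led or takeover mode; the 0.2 s silence threshold and the mode names are assumptions, not drawn from any real system.

```python
import time

class SensorHealthMonitor:
    """Track when each sensor last delivered usable data and flag dropouts.
    The staleness threshold is an illustrative assumption."""
    def __init__(self, max_silence_s: float = 0.2):
        self.max_silence_s = max_silence_s
        self.last_seen = {}

    def report(self, sensor: str, timestamp: float) -> None:
        """Record that a sensor produced a usable frame at this timestamp."""
        self.last_seen[sensor] = timestamp

    def healthy(self, sensor: str, now: float) -> bool:
        """A sensor is healthy if it has reported recently enough."""
        ts = self.last_seen.get(sensor)
        return ts is not None and (now - ts) <= self.max_silence_s

monitor = SensorHealthMonitor()
now = time.time()
monitor.report("radar", now)
monitor.report("camera", now - 1.0)  # camera frames have gone stale in the fog
if not monitor.healthy("camera", now):
    print("camera degraded -> fall back to radar-led mode within the ODD, or hand over")
```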
-- END --