How Should Autonomous Vehicles Navigate Pothole-Laden Roads?

10/10/2025

To truly attain Level 5 autonomy, autonomous vehicles need to do more than maneuver through urban streets; they must also handle roads riddled with potholes or located in remote areas. For a human driver, spotting a pothole triggers an almost reflexive swerve or lane change. For an autonomous vehicle to match that response, it needs a substantial stack of supporting sensing, decision-making, and control technology.

Before delving into today's discussion, let's explore why autonomous vehicles must factor in pothole navigation during their design phase. Small potholes, if not avoided promptly, can cause uncomfortable jolts for passengers or drivers. Larger potholes, on the other hand, pose a risk of wheel damage. Hence, effectively navigating roads with potholes is crucial for achieving Level 5 autonomy.

For human drivers, potholes are just one of many common road conditions that are quickly identified and handled. Autonomous vehicles, in contrast, rely on a multi-sensor, multi-layer approach to detect potholes. A prevalent method combines cameras, LiDAR, and millimeter-wave radar with inertial measurement units (IMUs), wheel-speed sensors, and accelerometers for comprehensive perception and inference. Cameras excel at capturing surface details and textures, enabling semantic segmentation (treating potholes as "road anomalies") or contour recognition through deep learning. LiDAR generates point clouds for 3D road reconstruction, detecting depressions or height variations via surface fitting. Radar operates reliably in adverse weather (rain, snow, fog) and helps identify shallow potholes or protrusions. IMUs and onboard accelerometers serve as "post-event sensors": they pick up the vertical impact of a wheel striking a pothole and enable "jolt/pothole event" detection via pattern recognition on the acceleration signal. Recent technical proposals have fused visual, LiDAR, and IMU methods (e.g., the vision-plus-IMU VIDAR approach) for pothole detection, with promising results.
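To make the "post-event sensor" idea concrete, below is a minimal sketch of IMU-based jolt detection. It is not drawn from any specific production system: the function name, sampling rate, window length, and z-score threshold are illustrative assumptions, and a real detector would rely on learned patterns rather than a fixed statistical threshold.

```python
import numpy as np

def detect_jolt_events(accel_z, fs=100, window_s=1.0, z_thresh=4.0):
    """Flag candidate pothole impacts in a vertical-acceleration trace.

    accel_z : sequence of vertical acceleration samples (m/s^2)
    fs      : sampling rate in Hz (assumed 100 Hz here)
    Returns indices where a sample deviates from its surrounding window
    by more than z_thresh standard deviations.
    """
    accel_z = np.asarray(accel_z, dtype=float)
    win = int(window_s * fs)
    events = []
    for i in range(win, len(accel_z) - win):
        local = accel_z[i - win:i + win]
        mu, sigma = local.mean(), local.std() + 1e-6
        if abs(accel_z[i] - mu) / sigma > z_thresh:
            events.append(i)
    return events
```

In practice, consecutive flagged samples would be merged into a single "pothole event" and cross-checked against wheel-speed and camera data before being reported.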

Navigating roads with potholes doesn't always necessitate immediate avoidance; flexibility is paramount. Autonomous vehicles depend on perception and decision-making modules for these operations. The perception module provides critical information: the pothole's relative position (lateral and longitudinal), size/depth/severity estimates, and its relationship to the vehicle's current speed and path (e.g., distance, arrival time). This information may originate from sensors (e.g., front cameras or long-range LiDAR detecting road morphology in advance) or "contact-based" signals (e.g., acceleration, vibration, wheel speed changes upon pothole impact). To make informed choices between avoidable and unavoidable scenarios, autonomous systems map pothole severity to risk levels. Minor bumps may be disregarded or mitigated with slight speed reductions; moderate depressions trigger deceleration and safe lane positioning; severe potholes or those posing vehicle damage risks prompt lane changes (if safe) or controlled passage with event reporting to the cloud or driver. The core principle is "risk-cost balancing," as avoidance maneuvers carry risks (e.g., rear-end collisions from abrupt steering). Thus, the decision-making module evaluates both potential pothole damage and avoidance-related traffic risks.
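The severity-to-action mapping described above can be sketched as a simple decision function. The thresholds, action names, and two-second lane-change margin below are purely illustrative assumptions rather than values from any deployed stack; a production planner would weigh these options with a continuous cost function instead of hard cut-offs.

```python
def choose_maneuver(severity, lane_change_safe, time_to_pothole_s):
    """Map estimated pothole severity to a maneuver, weighing avoidance risk.

    severity          : float in [0, 1] from the perception/risk modules
    lane_change_safe  : bool from surrounding-traffic checks
    time_to_pothole_s : estimated time until wheel contact
    """
    if severity < 0.2:
        return "ignore"            # minor bump: no action needed
    if severity < 0.5:
        return "reduce_speed"      # moderate depression: slow down in lane
    # Severe pothole: avoid only if a lane change is safe and there is
    # enough time; otherwise traverse slowly and report the event.
    if lane_change_safe and time_to_pothole_s > 2.0:
        return "lane_change"
    return "slow_traverse_and_report"
```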

Breaking down the process: perception precedes intermediate prediction and assessment, culminating in trajectory generation and control. The perception layer outputs probabilistic pothole candidates (position, depth probability, confidence), while localization and synchronization modules align pothole geographic coordinates with vehicle odometry and timing (crucial for accuracy, preventing "misplaced potholes"). The risk assessor then estimates the consequences of inaction and the costs of various maneuvers using vehicle dynamics models (incorporating speed, steering angle, braking capacity, axle loads, etc.). For instance, it calculates whether tire impact forces during pothole traversal exceed thresholds or if deceleration affects safe following distances. After assessment, the trajectory planner generates smooth, safe alternative paths, which may involve limited lateral shifts, continuous deceleration, or, rarely, "brief stops" for manual or remote assistance. Finally, the controller translates paths into steering, throttle, and braking commands while monitoring suspension and vehicle responses, adjusting torque distribution or suspension damping (if equipped with semi-active or active systems) to mitigate impacts. This end-to-end process must complete decision-making and execution within milliseconds to seconds, ensuring vehicle protection without creating greater safety hazards.
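As a toy illustration of the risk-assessment step, the sketch below checks whether the vehicle can comfortably slow to a tolerable traversal speed before reaching a detected pothole, using the braking relation v² = v₀² − 2ad. The impact score is a rough heuristic rather than a validated wheel-load model, and all names and limits are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class PotholeCandidate:
    distance_m: float   # longitudinal distance to the pothole
    depth_m: float      # estimated depth
    confidence: float   # detection confidence from perception

def assess_risk(c: PotholeCandidate, speed_mps: float,
                comfort_speed_mps: float = 5.0,
                max_comfort_decel: float = 2.5):
    """Toy risk assessment: can we comfortably slow down before impact?

    Returns (impact_score, required_decel, feasible). The impact score is
    only a proxy for wheel load: deeper potholes hit at higher speed score
    worse.
    """
    impact_score = c.confidence * c.depth_m * speed_mps ** 2
    if speed_mps <= comfort_speed_mps:
        return impact_score, 0.0, True
    # v^2 = v0^2 - 2*a*d  ->  a = (v0^2 - v^2) / (2*d)
    required_decel = (speed_mps ** 2 - comfort_speed_mps ** 2) / (2 * c.distance_m)
    return impact_score, required_decel, required_decel <= max_comfort_decel
```

If the required deceleration exceeds the comfort limit, the planner would fall back to a lateral shift or, failing that, accept the traversal and pre-condition the suspension.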

Regarding perception algorithms, pothole detection can be categorized into "direct observation" and "indirect inference." Direct observation relies on cameras and LiDAR for road geometry reconstruction, applying traditional image processing and deep learning (e.g., semantic segmentation, instance segmentation) to identify pothole edges or fitting point clouds to detect abnormal height differences. Indirect inference is akin to "observing others' reactions," inferring road anomalies by monitoring preceding vehicles' displacement, acceleration changes, or headlight reflections. Combining both methods compensates for individual limitations; when vision is impaired (e.g., glare or nighttime), IMU/wheel speed signals still indicate potholes, while clear vision enables precise direct measurements. Notably, research on vibration and acceleration-based road classification has demonstrated that machine learning can categorize road conditions (smooth, bumpy, potholed, speed bumps, etc.) with high accuracy, aiding post-event pothole identification and mapping.
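The vibration-based road classification mentioned above can be approximated with ordinary feature engineering plus an off-the-shelf classifier. The sketch below is a minimal example under assumed conditions (a 100 Hz vertical-acceleration stream and pre-labelled drives); the feature set, window length, and class labels are illustrative, not those of any cited study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(accel_z, fs=100, window_s=2.0):
    """Simple per-window statistics of vertical acceleration."""
    accel_z = np.asarray(accel_z, dtype=float)
    win = int(window_s * fs)
    feats = []
    for start in range(0, len(accel_z) - win, win):
        seg = accel_z[start:start + win]
        feats.append([seg.std(),                       # roughness
                      np.abs(seg).max(),               # peak impact
                      np.sqrt(np.mean(seg ** 2)),      # RMS energy
                      np.mean(np.abs(np.diff(seg)))])  # jerkiness
    return np.array(feats)

# Hypothetical labels: 0 = smooth, 1 = bumpy, 2 = potholed, 3 = speed bump
clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(window_features(train_accel_z), train_labels)
# predicted = clf.predict(window_features(new_drive_accel_z))
```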

Scaling single-vehicle detection to fleet or city-wide levels introduces "mapping and collaboration." Autonomous systems can report pothole events to the cloud, creating crowdsourced road condition maps shared among vehicles. This enables advance warnings for subsequent vehicles and helps operators identify high-frequency pothole locations for road maintenance feedback. Studies also propose "vehicle-to-vehicle cooperative detection" and "road anomaly prediction via preceding vehicle motion," the latter requiring no high-definition road reconstruction but inferring irregularities through visual tracking of preceding vehicles' vibration and displacement patterns. This approach is particularly valuable for poorly paved, narrow, or obstructed urban roads.
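A crowdsourced pothole map can be approximated by snapping vehicle reports to small geographic cells and counting them. The sketch below is a simplified aggregation under assumed inputs (latitude/longitude/severity tuples uploaded by vehicles); the cell size and report format are illustrative.

```python
from collections import defaultdict

def aggregate_reports(reports, cell_deg=1e-4):
    """Aggregate pothole reports into roughly 10 m grid cells.

    reports : iterable of (lat, lon, severity) tuples from the fleet
    Returns {cell: (report_count, mean_severity)}, which can drive advance
    warnings for following vehicles and maintenance prioritization.
    """
    cells = defaultdict(list)
    for lat, lon, severity in reports:
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        cells[cell].append(severity)
    return {cell: (len(s), sum(s) / len(s)) for cell, s in cells.items()}
```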

Beyond technical aspects, handling roads with potholes involves numerous considerations. Balancing false positives and negatives is crucial; overly sensitive systems cause frequent deceleration or unnecessary lane changes, disrupting comfort and traffic flow, while overly conservative systems may fail to detect severe potholes. Thus, decisions must incorporate perception outputs, historical data, map data, sensor reliability, and current traffic conditions for robustness. The physical limits of speed and detection distance also matter; higher speeds demand earlier pothole identification and longer avoidance windows, prompting conservative strategies (e.g., preemptive deceleration) or reliance on high-resolution sensor arrays on highways. Sensor calibration and mechanical durability are concerns, as repeated pothole impacts may alter sensor positions or cause damage. Systems need online self-calibration or degraded operation strategies when sensor performance declines.
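The speed-versus-detection-distance trade-off can be made concrete with a back-of-the-envelope calculation: the distance travelled during perception and planning latency plus the braking distance to a tolerable traversal speed. The numbers below (latency, deceleration, target speed) are illustrative and not calibrated to any specific vehicle.

```python
def min_detection_distance(speed_mps, latency_s=0.3, decel=3.0,
                           target_speed_mps=8.0):
    """Rough lower bound on how far ahead a pothole must be detected."""
    latency_dist = speed_mps * latency_s
    if speed_mps <= target_speed_mps:
        return latency_dist
    brake_dist = (speed_mps ** 2 - target_speed_mps ** 2) / (2 * decel)
    return latency_dist + brake_dist

# At 30 m/s (~108 km/h): 0.3*30 + (30^2 - 8^2)/(2*3) ≈ 9 + 139 ≈ 148 m,
# which is why highway speeds push the problem toward long-range sensors
# or conservative preemptive deceleration.
print(round(min_detection_distance(30.0), 1))
```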

To better handle roads with potholes, innovative technical solutions abound. Some studies propose closed-loop "active suspension + perception" systems, adjusting suspension damping or body height upon pothole detection to absorb impacts and reduce vehicle sway, with commercial examples in semi-active/active suspension-equipped production vehicles. Another direction utilizes high-resolution radar or synthetic aperture radar for road reconstruction, albeit at higher costs and computational complexity. Other research frames pothole detection as "anomaly detection" or "temporal event detection," employing deep learning models for end-to-end discrimination of multi-sensor time series, particularly effective in low-speed urban scenarios.
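As a rough illustration of the perception-plus-active-suspension closed loop, the sketch below softens a semi-active damper just before a predicted wheel-pothole contact. It is a deliberately simplified placeholder: real controllers use skyhook, preview MPC, or similar strategies, and every name and value here is an assumption made for the example.

```python
def preview_damping_command(pothole_eta_s, base_damping=0.5,
                            soft_damping=0.2, preview_window_s=0.3):
    """Toy preview control for one semi-active damper.

    pothole_eta_s : predicted time to wheel-pothole contact (None if no
                    pothole is expected on this wheel's path)
    Softens the damper shortly before impact so the wheel can follow the
    depression, then restores the baseline setting afterwards.
    """
    if pothole_eta_s is not None and 0.0 <= pothole_eta_s <= preview_window_s:
        return soft_damping
    return base_damping
```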

For autonomous fleets and urban road managers, resolving pothole issues requires more than just technological advancements; it demands coordination with operational systems, legal liability frameworks, and road maintenance budgets. Operators prioritize road repairs using high-frequency pothole reports, while city managers leverage fleet data for pavement health monitoring, optimizing maintenance resources. Passengers have limited tolerance for "frequent deceleration" or "pothole-induced jolts," necessitating a balance among comfort, efficiency, and safety in autonomous systems.
