@article{khalil2024plm,
  title={PLM-Net: Perception Latency Mitigation Network for Vision-Based Lateral Control of Autonomous Vehicles},
  author={Khalil, Aws and Kwon, Jaerock},
  journal={arXiv preprint arXiv:2407.16740},
  year={2024},
  abstract={This study introduces the Perception Latency Mitigation Network (PLM-Net), a deep learning framework designed to address perception latency in vision-based lateral control of Autonomous Vehicles (AVs). Perception latency, the delay between sensing the environment through vision sensors (e.g., cameras) and executing a control action (e.g., steering), can degrade steering stability and lead to inaccurate lateral control. This issue remains understudied in both classical and neural-network-based control methods. Rather than reducing latency itself, PLM-Net mitigates its effect on control performance. The framework comprises two models: the Base Model (BM), representing the existing lane-keeping assist controller, and the Timed Action Prediction Model (TAPM), which forecasts future steering actions under different latency conditions. By interpolating the outputs of the BM and TAPM according to the real-time latency value, PLM-Net adaptively compensates for both constant and time-varying delays without modifying the original control pipeline. Experimental evaluations in a closed-loop simulation show that PLM-Net improves steering accuracy and trajectory stability across multiple latency settings, achieving up to 62% and 78% reductions in Mean Absolute Error (MAE) for the constant- and variable-latency cases, respectively.}
}