DeepGuard: a framework for safeguarding autonomous driving systems from inconsistent behaviour

Authors: Manzoor Hussain, Nazakat Ali, Jang-Eui Hong

Abstract

Deep neural network (DNN)-based autonomous driving systems (ADSs) are expected to reduce road accidents and improve safety in the transportation domain because they remove human error from driving tasks. However, a DNN-based ADS may exhibit erroneous or unexpected behaviour under unexpected driving conditions, which can cause accidents. Safety assurance is therefore vital for an ADS. A DNN-based ADS is a highly complex system with a strong demand for robustness, specifically the ability to predict unexpected driving conditions in order to prevent potential inconsistent behaviour. A DNN model's performance cannot be generalized to all driving conditions, so driving conditions that were not considered during training may have unpredictable consequences for the safety of autonomous vehicles. This study proposes an autoencoder and time series analysis–based anomaly detection system to prevent safety-critical inconsistent behaviour of autonomous vehicles at runtime. Our approach, called DeepGuard, consists of two components. The first component, the inconsistent-behaviour predictor, uses an autoencoder and time series analysis to reconstruct driving scenarios; based on the reconstruction error (e) and a threshold (θ), it distinguishes normal from unexpected driving scenarios and predicts potential inconsistent behaviour. The second component provides on-the-fly safety guards: it automatically activates healing strategies to prevent inconsistent behaviour. We evaluated DeepGuard's ability to predict injected anomalous driving scenarios using publicly available open-source DNN-based ADSs in the Udacity simulator. Our simulation results show that the best variant of DeepGuard predicts up to 93% of inconsistent behaviours on the CHAUFFEUR ADS, 83% on the DAVE-2 ADS, and 80% on the EPOCH ADS model, outperforming SELFORACLE and DeepRoad. Overall, DeepGuard can prevent up to 89% of all predicted inconsistent behaviours of the ADS by executing predefined safety guards.
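As a rough illustration of the reconstruction-error-plus-threshold idea described in the abstract (not DeepGuard's actual implementation), the sketch below trains a small autoencoder on normal driving frames, computes the per-frame reconstruction error e, and flags frames whose error exceeds a threshold θ as unexpected. All names (FrameAutoencoder, fit_threshold, is_unexpected), the dense architecture, and the quantile-based choice of θ are assumptions made for illustration; the paper's autoencoder architecture, time series analysis, and threshold-fitting procedure may differ.

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Minimal dense autoencoder over flattened camera frames (illustrative only)."""
    def __init__(self, input_dim=3 * 80 * 160, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim), nn.Sigmoid(),  # assumes inputs scaled to [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def reconstruction_errors(model, frames):
    """Per-frame mean squared reconstruction error e."""
    model.eval()
    with torch.no_grad():
        recon = model(frames)
        return ((frames - recon) ** 2).mean(dim=1)


def fit_threshold(train_errors, quantile=0.99):
    """Choose theta as a high quantile of errors on normal training data
    (a simplification; a fitted probability distribution could be used instead)."""
    return torch.quantile(train_errors, quantile).item()


def is_unexpected(model, frame_batch, theta):
    """Flag frames whose reconstruction error exceeds theta as unexpected
    driving conditions that may lead to inconsistent behaviour."""
    return reconstruction_errors(model, frame_batch) > theta


if __name__ == "__main__":
    # Toy usage with random tensors standing in for normalized camera frames.
    model = FrameAutoencoder(input_dim=3 * 80 * 160)
    normal_frames = torch.rand(32, 3 * 80 * 160)
    theta = fit_threshold(reconstruction_errors(model, normal_frames))
    new_frames = torch.rand(8, 3 * 80 * 160)
    print(is_unexpected(model, new_frames, theta))
```

In a runtime monitor of this kind, frames flagged as unexpected would trigger a predefined safety guard (e.g. slowing the vehicle), which corresponds to the second component described in the abstract.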

Keywords: Autonomous driving systems, Anomaly detection, Deep learning, Safety guards, DNN

Paper link: https://doi.org/10.1007/s10515-021-00310-0