Automotive radar and camera fusion using Generative Adversarial Networks
Abstract:
Radar sensors are considered very robust under harsh weather and poor lighting conditions. Largely owing to this reputation, they have found broad application in driver assistance and highly automated driving systems. However, radar sensors have considerably lower precision than cameras. Low sensor precision causes ambiguity in the human interpretation of the measurement data and makes the data labeling process difficult and expensive. On the other hand, without a large amount of high-quality labeled training data, it is difficult, if not impossible, to ensure that supervised machine learning models can predict, classify, or otherwise analyze the phenomenon of interest with the required accuracy. This paper presents a method for fusing radar sensor measurements with camera images. The proposed fully unsupervised machine learning algorithm converts the radar sensor data to artificial, camera-like environmental images. Through such data fusion, the algorithm produces more consistent, accurate, and useful information than that provided by the radar or the camera alone. The essential contribution of the work is a novel Conditional Multi-Generator Generative Adversarial Network (CMGGAN) that, being conditioned on the radar sensor measurements, can produce visually appealing images that qualitatively and quantitatively contain all environment features detected by the radar sensor.
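The abstract does not specify the CMGGAN architecture in detail, so the following is only a minimal sketch of the general idea it names: a GAN generator conditioned on radar measurements that emits a camera-like image. All class names, layer sizes, the `radar_channels` parameter, and the single-generator simplification are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a radar-conditioned GAN generator (illustrative only;
# layer sizes, names, and the single-generator simplification are assumptions,
# not the paper's CMGGAN architecture).
import torch
import torch.nn as nn

class RadarConditionedGenerator(nn.Module):
    """Maps a latent noise vector plus a radar measurement grid to a camera-like image."""
    def __init__(self, latent_dim=100, radar_channels=1, img_channels=3):
        super().__init__()
        # Encode the radar grid into a conditioning feature map.
        self.radar_encoder = nn.Sequential(
            nn.Conv2d(radar_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        # Project the noise vector to the same spatial resolution as the radar features.
        self.noise_proj = nn.Linear(latent_dim, 128 * 16 * 16)
        # Decode the fused features back up to an image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, img_channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z, radar):
        # radar: (B, radar_channels, 64, 64) -> conditioning map (B, 128, 16, 16)
        cond = self.radar_encoder(radar)
        noise = self.noise_proj(z).view(-1, 128, 16, 16)
        # Condition the generator by channel-wise concatenation of noise and radar features.
        return self.decoder(torch.cat([cond, noise], dim=1))

# Usage: generate fake 64x64 images from noise, conditioned on radar grids.
G = RadarConditionedGenerator()
z = torch.randn(4, 100)
radar = torch.rand(4, 1, 64, 64)
fake_images = G(z, radar)  # shape: (4, 3, 64, 64)
```

Concatenating the encoded radar features with the projected noise is one common way to condition a GAN on sensor measurements; an adversarial discriminator (not shown) would then score image/radar pairs, and the paper's multi-generator extension would train several such generators jointly.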
Article history: Received 27 July 2018, Revised 3 April 2019, Accepted 7 April 2019, Available online 24 April 2019, Version of Record 28 May 2019.
DOI: https://doi.org/10.1016/j.cviu.2019.04.002