PoseConvGRU: A Monocular Approach for Visual Ego-motion Estimation by Learning
Authors:
Highlights:
• We propose a novel framework named PoseConvGRU, a data-driven and fully trainable monocular approach to visual ego-motion estimation (a generic ConvGRU cell is sketched after this list).
• We design a series of data augmentation methods that avoid overfitting and improve the model's performance in challenging scenarios such as high-speed or reverse driving.
• We augment the training data by randomly skipping frames to simulate velocity variation, which improves performance in turning and high-velocity situations (see the frame-skipping sketch after this list).
• Our method shows competitive performance against state-of-the-art monocular geometric and learning-based methods, encouraging further research on learning-based approaches.
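The framework's name and the "Recurrent Convolutional Neural Networks" keyword imply a convolutional GRU as the temporal module. Below is a minimal, generic ConvGRU cell in PyTorch for readers unfamiliar with the building block; the class name, channel sizes, and kernel size are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Generic convolutional GRU cell: the GRU gates are computed with 2-D
    convolutions, so the hidden state keeps a spatial feature-map layout.
    All sizes here are illustrative, not taken from the paper."""

    def __init__(self, in_ch: int, hid_ch: int, kernel: int = 3):
        super().__init__()
        pad = kernel // 2
        # update gate z and reset gate r, computed jointly
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, kernel, padding=pad)
        # candidate hidden state
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, kernel, padding=pad)
        self.hid_ch = hid_ch

    def forward(self, x, h=None):
        if h is None:  # zero initial state on the first time step
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde  # blend of old and candidate state
```

Running such a cell over per-frame-pair CNN features lets a pose regressor exploit motion cues across longer image sequences rather than isolated pairs.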
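The frame-skipping highlight can be made concrete with a short sketch: sampling image pairs (I_t, I_{t+k}) with a random gap k mimics faster ego-motion, and the ground-truth relative pose of a skipped pair is the composition of the intermediate frame-to-frame transforms. Everything below (function name, `max_skip`, the 4x4 homogeneous-transform convention) is an assumption for illustration, not the paper's code.

```python
import random
import numpy as np

def skipped_pairs(frames, rel_poses, max_skip=3):
    """Build (frame_t, frame_{t+k}, T_{t->t+k}) training triples with a
    random gap k in [1, max_skip], chaining the intermediate relative
    poses so the ground-truth target still matches the skipped pair.

    frames    : list of images
    rel_poses : rel_poses[i] is the 4x4 transform between frames i and i+1
    max_skip  : illustrative upper bound on the gap (not from the paper)
    """
    triples = []
    n = len(frames)
    for t in range(n - 1):
        k = random.randint(1, min(max_skip, n - 1 - t))
        T = np.eye(4)
        for i in range(t, t + k):
            T = T @ rel_poses[i]  # accumulate frame-to-frame motion
        triples.append((frames[t], frames[t + k], T))
    return triples
```

Mixing such skipped pairs with consecutive pairs exposes the network to a wider distribution of apparent velocities than the raw sequence alone provides.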
Keywords: Ego-motion, Pose estimation, Deep learning, Recurrent Convolutional Neural Networks, Data augmentation
Article history: Received 2 June 2019, Revised 15 October 2019, Accepted 24 December 2019, Available online 21 January 2020, Version of Record 7 February 2020.
DOI: https://doi.org/10.1016/j.patcog.2019.107187