Tracking by switching state space models
Authors:
Abstract
We propose a novel tracking method that can switch between different state representations, e.g., image coordinates in different views, or image and ground-plane coordinates. During tracking, our method adaptively switches between these representations. We demonstrate its applicability for dynamic cameras tracking dynamic objects: by combining the image-based representation (whose trajectories become non-smooth when the camera shakes) with the ground-plane-based one (which suffers from estimation uncertainty in visual odometry and ground-plane orientation), the disadvantages of both representations can be overcome. Non-occluded observations on the image plane provide strong appearance cues for the target, while smooth paths on the ground plane provide strong motion cues with the camera motion factored out. Following a Bayesian tracking approach, we propose a probabilistic framework that determines the most appropriate state space model (SSM) at each time instant: image plane, ground plane, or both. Experimental results demonstrate that our method outperforms the state of the art.
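To make the switching idea concrete, below is a minimal sketch (not the authors' implementation) of per-frame selection between two constant-velocity Kalman filters, one in image coordinates and one on the ground plane, by comparing their predictive likelihoods in a Bayesian fashion. All names (KalmanSSM, switch_step) and noise parameters are illustrative assumptions.

```python
# Hypothetical sketch of switching between two state space models
# (image plane vs. ground plane); NOT the paper's implementation.
import numpy as np

class KalmanSSM:
    """Minimal linear-Gaussian constant-velocity state space model."""
    def __init__(self, q=1e-2, r=1e-1):
        self.F = np.array([[1, 0, 1, 0],
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # observe position only
        self.Q = q * np.eye(4)                           # process noise
        self.R = r * np.eye(2)                           # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def log_likelihood(self, z):
        """Predictive log-likelihood of measurement z under this SSM."""
        y = z - self.H @ self.x                          # innovation
        S = self.H @ self.P @ self.H.T + self.R          # innovation covariance
        _, logdet = np.linalg.slogdet(2 * np.pi * S)
        return -0.5 * (logdet + y @ np.linalg.solve(S, y))

    def update(self, z):
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P


def switch_step(ssm_image, ssm_ground, z_image, z_ground, prior=(0.5, 0.5)):
    """Weight the two SSMs by their posterior probability for the current
    frame, update both filters, and report the preferred representation."""
    ssm_image.predict()
    ssm_ground.predict()
    log_post = np.array([np.log(prior[0]) + ssm_image.log_likelihood(z_image),
                         np.log(prior[1]) + ssm_ground.log_likelihood(z_ground)])
    weights = np.exp(log_post - log_post.max())
    weights /= weights.sum()
    ssm_image.update(z_image)
    ssm_ground.update(z_ground)
    return ("image" if weights[0] >= weights[1] else "ground"), weights
```

In this toy version the per-frame posterior weights play the role of the paper's model-selection step; the actual framework additionally handles occlusion and the case where both representations are used jointly.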
Keywords:
Article history: Received 14 July 2015, Revised 27 February 2016, Accepted 5 March 2016, Available online 21 November 2016, Version of Record 21 November 2016.
DOI: https://doi.org/10.1016/j.cviu.2016.03.006