Robust visual tracking with structured sparse representation appearance model
Authors:
Abstract
In this paper, we present a structured sparse representation appearance model for tracking an object in a video sequence. The idea behind our method is to model the appearance of the object as a sparse linear combination of a structured union of subspaces drawn from a basis library, which consists of a learned Eigen template set and a partitioned occlusion template set. This structured sparse representation framework is well matched to the practical visual tracking problem because it takes the contiguous spatial distribution of occlusion into account. To achieve a sparse solution while reducing the computational cost, Block Orthogonal Matching Pursuit (BOMP) is adopted to solve the structured sparse representation problem. Furthermore, to update the Eigen templates over time, an incremental Principal Component Analysis (PCA) based learning scheme is applied so that the model adapts online to the varying appearance of the target. We then build a probabilistic observation model based on the approximation error between the recovered image and the observed sample. Finally, this observation model is integrated with a stochastic affine motion model to form a particle filter framework for visual tracking. Experiments on publicly available benchmark video sequences demonstrate the advantages of the proposed algorithm over other state-of-the-art approaches.
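The core computation the abstract describes, representing a candidate patch over a basis library of Eigen templates and partitioned occlusion templates, selecting blocks greedily with BOMP, and turning the reconstruction residual into an observation likelihood, can be sketched as follows. This is a minimal NumPy illustration under assumed toy dimensions; the `bomp` function, the block partition, and the exponential likelihood are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def bomp(D, blocks, y, max_blocks=3, tol=1e-6):
    """Block Orthogonal Matching Pursuit (sketch).

    D      : (d, n) dictionary whose columns are grouped into blocks
    blocks : list of column-index arrays, one per block (e.g. the Eigen-template
             block and each occlusion-patch block)
    y      : (d,) vectorized candidate image patch

    Greedily picks the block most correlated with the residual, then
    re-solves a least-squares fit over all selected blocks.
    """
    residual = y.copy()
    selected = []
    x = np.zeros(D.shape[1])
    for _ in range(max_blocks):
        scores = [np.linalg.norm(D[:, b].T @ residual) for b in blocks]
        best = int(np.argmax(scores))
        if best not in selected:
            selected.append(best)
        idx = np.concatenate([blocks[b] for b in selected])
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        x[:] = 0.0
        x[idx] = coef
        residual = y - D @ x
        if np.linalg.norm(residual) < tol:
            break
    return x, residual

# Toy usage: dictionary = [Eigen templates | partitioned occlusion templates]
d, k = 64, 8                                  # patch dimension, number of Eigen templates (assumed)
rng = np.random.default_rng(0)
eigen_templates = rng.standard_normal((d, k))
occlusion_templates = np.eye(d)               # one indicator column per pixel (simplified)
D = np.hstack([eigen_templates, occlusion_templates])
blocks = [np.arange(k)] + [k + np.arange(i, i + 16) for i in range(0, d, 16)]
y = D[:, :k] @ rng.standard_normal(k)         # synthetic unoccluded observation
x, r = bomp(D, blocks, y)
likelihood = np.exp(-np.linalg.norm(r) ** 2)  # observation weight for a particle filter
```

In this sketch the first block holds the Eigen-template coefficients and the remaining blocks correspond to spatially contiguous occlusion regions, so an occluded candidate activates only a few occlusion blocks, which is the structured-sparsity prior the paper exploits.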
Keywords: Appearance model, Block-sparsity, Orthogonal matching pursuit, Sparse representation, Visual tracking
Article history: Received 4 May 2011, Revised 12 September 2011, Accepted 9 December 2011, Available online 23 December 2011.
Article URL: https://doi.org/10.1016/j.patcog.2011.12.004