Self-taught learning of a deep invariant representation for visual tracking via temporal slowness principle

Authors:

Highlights:

• Temporal slowness principle is exploited for learning tracking representation.

• Learned invariant representation is decomposed into amplitude and phase features.

• Higher-level features are learned by stacking autoencoders convolutionally.

• A novel observation model counters drift and collects relevant samples online.

• Tracking experiments show our method outperforms state-of-the-art trackers.
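The temporal slowness principle rewards features that vary slowly across consecutive video frames, on the intuition that object identity changes slowly even when pixel appearance changes quickly. A minimal NumPy sketch of such a slowness penalty (an illustration of the general principle, not the paper's exact objective) might look like:

```python
import numpy as np

def slowness_loss(features):
    """Temporal slowness penalty: mean squared difference between
    feature vectors of consecutive frames. Features that vary slowly
    over time (i.e. are invariant to fast appearance changes) yield
    a small loss; rapidly fluctuating features yield a large one."""
    diffs = np.diff(features, axis=0)           # (T-1, D) frame-to-frame changes
    return np.mean(np.sum(diffs ** 2, axis=1))  # average squared change per step

# Toy comparison: a smooth feature trajectory vs. a noisy one.
t = np.linspace(0.0, 1.0, 100)
slow = np.column_stack([t, t ** 2])             # slowly drifting features
fast = np.random.RandomState(0).randn(100, 2)   # rapidly fluctuating features

assert slowness_loss(slow) < slowness_loss(fast)
```

In a learned representation, a penalty of this form would be minimized jointly with a reconstruction or discriminative objective, so that the network cannot satisfy it trivially by outputting constant features.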

Abstract:


Keywords: Visual tracking, Temporal slowness, Deep learning, Self-taught learning, Invariant representation

Article history: Received 20 June 2014, Revised 9 January 2015, Accepted 17 February 2015, Available online 26 February 2015, Version of Record 17 June 2015.

DOI: https://doi.org/10.1016/j.patcog.2015.02.012