Video Classification via Weakly Supervised Sequence Modeling
Abstract
Traditional approaches for video classification treat the entire video clip as one data instance. They extract visual features from video frames, which are then quantized (e.g., by K-means) and pooled (e.g., by average pooling) to produce a single feature vector. Such holistic video representations are then used as input to a classifier. Despite their efficiency, global aggregate feature representations unavoidably bring in redundant and noisy information from the background and unrelated video frames, which sometimes overwhelms the targeted visual patterns. Moreover, temporal correlations between consecutive video frames, which may be the key indicators of an action or event, are ignored in both training and testing. To this end, we propose Weakly Supervised Sequence Modeling (WSSM), a novel framework that seamlessly combines multiple-instance learning (MIL) and the Conditional Random Field (CRF) model. Our model takes each video as a bag and each video segment as an instance. In our framework, the salient local patterns for different video categories are discovered by MIL, and intrinsic temporal dependencies between instances are explicitly exploited using a chain CRF model. In the training stage, we design a novel conditional likelihood formulation that requires only video-level annotation. This likelihood can be maximized using an alternating optimization method; the training algorithm is guaranteed to converge and is very efficient. In the testing stage, videos are classified by the learned CRF model. The proposed WSSM algorithm outperforms other MIL-based approaches in both accuracy and efficiency on synthetic data and on realistic videos for gesture and action classification.
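The bag/instance view and the chain-CRF inference described above can be sketched as follows. This is a minimal illustration, not the authors' actual learning algorithm: the log-potentials, the two-state label set (background vs. action), and the MIL decision rule (a video is positive if any decoded segment is non-background) are all simplifying assumptions for exposition.

```python
import numpy as np

def chain_crf_log_partition(unary, trans):
    """Log-partition function of a linear-chain CRF via the forward algorithm.

    unary: (T, K) array of per-segment log-potentials over K states.
    trans: (K, K) array of transition log-potentials.
    """
    alpha = unary[0].copy()
    for t in range(1, len(unary)):
        # log-sum-exp over the previous state for each current state
        alpha = unary[t] + np.logaddexp.reduce(alpha[:, None] + trans, axis=0)
    return float(np.logaddexp.reduce(alpha))

def viterbi(unary, trans):
    """Most likely state sequence (one state per video segment)."""
    T, K = unary.shape
    delta = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + trans
        back[t] = scores.argmax(axis=0)
        delta = unary[t] + scores.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def classify_video(unary, trans, background=0):
    """MIL-style decision under weak (video-level) supervision:
    the bag (video) is positive if any instance (segment) is decoded
    as a non-background state."""
    states = viterbi(unary, trans)
    label = int(any(s != background for s in states))
    return label, states
```

A toy usage: with states {0: background, 1: action} and segment log-potentials favoring an action in the middle segment, `classify_video` flags the video as positive while identifying which segment triggered the decision.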
Review history: Received 12 December 2014, Revised 21 September 2015, Accepted 21 October 2015, Available online 10 November 2015, Version of Record 19 October 2016.
DOI: https://doi.org/10.1016/j.cviu.2015.10.012