Fusing appearance and distribution information of interest points for action recognition

Authors:

Highlights:

Abstract

Most of the existing action recognition methods represent actions as bags of space-time interest points. Specifically, space-time interest points are detected from the video and described using appearance-based descriptors. Each descriptor is then classified as a video-word and a histogram of these video-words is used for recognition. These methods therefore rely solely on the discriminative power of individual local space-time descriptors, whilst ignoring the potentially useful information about the global spatio-temporal distribution of interest points. In this paper we propose a novel action representation method which differs significantly from the existing interest point based representation in that only the global distribution information of interest points is exploited. In particular, holistic features from clouds of interest points accumulated over multiple temporal scales are extracted. Since the proposed spatio-temporal distribution representation contains different but complementary information to the conventional Bag of Words representation, we formulate a feature fusion method based on Multiple Kernel Learning. Experiments using the KTH and WEIZMANN datasets demonstrate that our approach outperforms most existing methods, in particular under occlusion and changes in view angle, clothing, and carrying condition.
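The fusion step described above combines a kernel computed on Bag-of-Words histograms with a kernel computed on the interest-point distribution features. As a minimal sketch of this idea, the snippet below builds an exponential chi-square kernel (a common choice for histogram features) for each feature type and combines them with a fixed convex weighting; the function names and the fixed weights are illustrative assumptions, and a full Multiple Kernel Learning solver would instead learn the weights jointly with the classifier.

```python
import numpy as np

def chi2_kernel(X, Y, gamma=1.0):
    """Exponential chi-square kernel, often used for histogram features.

    K(x, y) = exp(-gamma * sum_k (x_k - y_k)^2 / (x_k + y_k)).
    """
    D = np.zeros((len(X), len(Y)))
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            denom = x + y
            mask = denom > 0  # skip empty bins to avoid division by zero
            D[i, j] = np.sum((x[mask] - y[mask]) ** 2 / denom[mask])
    return np.exp(-gamma * D)

def fuse_kernels(kernels, weights):
    """Convex combination of base kernels.

    A fixed-weight stand-in for MKL: a real MKL solver would learn
    `weights` jointly with the SVM instead of taking them as given.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise so the weights form a convex combination
    return sum(wi * K for wi, K in zip(w, kernels))

# Toy per-video features: BoW histograms and (hypothetical)
# point-distribution histograms for the same three videos.
bow = np.array([[0.2, 0.8, 0.0], [0.5, 0.5, 0.0], [0.1, 0.1, 0.8]])
dist = np.array([[0.6, 0.4], [0.3, 0.7], [0.9, 0.1]])

K_bow = chi2_kernel(bow, bow)
K_dist = chi2_kernel(dist, dist)
K_fused = fuse_kernels([K_bow, K_dist], [0.4, 0.6])
```

The fused matrix `K_fused` could then be passed to any kernel classifier that accepts a precomputed kernel (e.g. an SVM), which is the role the MKL-trained combination plays in the paper's pipeline.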

Keywords: Action recognition, Clouds of Points, Feature fusion, Interest points detection, Multiple Kernel Learning

Article history: Received 23 February 2010, Revised 26 July 2011, Accepted 12 August 2011, Available online 6 September 2011.

DOI: https://doi.org/10.1016/j.patcog.2011.08.014