Semantic video scene segmentation and transfer
Abstract
In this paper we present a new approach to semantically segment a scene based on video activity and to transfer the learned semantic categories to other, different scenarios. In the proposed approach, a user annotates a few scenes by labeling each area with a functional category such as background, entry/exit, walking path, or point of interest. For each area, we compute features derived from object tracks extracted in real time from hours of video. The characteristics of each functional area learned from the labeled training sequences are then used to classify regions in different scenarios. We demonstrate the proposed approach on several hours of video from three different indoor scenes, where we achieve state-of-the-art classification results.
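The abstract outlines a pipeline of per-region features computed from object tracks, a classifier trained on user-labeled scenes, and label transfer to new scenes. The following minimal sketch illustrates that general idea only; the feature set, region masks, and random-forest classifier are assumptions, not the authors' actual method.

```python
# Minimal sketch (not the authors' code): aggregate simple track statistics
# per region, train a classifier on labeled scenes, apply it to a new scene.
# Feature choices and classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def region_features(tracks, region_mask):
    """Aggregate assumed track statistics for one region.

    tracks: list of tracks, each a list of (x, y, vx, vy) samples.
    region_mask: boolean image marking the region's pixels.
    """
    pts = [(x, y, vx, vy) for tr in tracks for (x, y, vx, vy) in tr
           if region_mask[int(y), int(x)]]
    if not pts:
        return np.zeros(4)
    arr = np.asarray(pts)
    speed = np.hypot(arr[:, 2], arr[:, 3])
    # occupancy count, mean speed, speed variance, mean heading
    return np.array([len(arr), speed.mean(), speed.var(),
                     np.arctan2(arr[:, 3], arr[:, 2]).mean()])

# X_train: stacked region features from the labeled training scenes
# y_train: functional labels (background, entry/exit, walking path, ...)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(X_train, y_train)          # learn the functional categories
# y_new = clf.predict(X_new_scene)   # transfer labels to a different scene
```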
Article history: Received 5 June 2013, Accepted 14 February 2014, Available online 25 February 2014.
DOI: https://doi.org/10.1016/j.cviu.2014.02.008