Scene categorization via contextual visual words
Abstract
In this paper, we propose a novel scene categorization method based on contextual visual words. The proposed method extends the traditional 'bag of visual words' model by introducing contextual information from the coarser scale and from neighborhood regions into the local region of interest, based on unsupervised learning. This contextual information provides useful cues about the region of interest, reducing the ambiguity that arises when visual words represent local regions in isolation. The improved visual-word representation of a scene image thereby enhances categorization performance. The proposed method is evaluated on three scene classification datasets with 8, 13, and 15 scene categories, respectively, using 10-fold cross-validation. The experimental results show that the proposed method achieves recognition rates of 90.30%, 87.63%, and 85.16% on Datasets 1, 2, and 3, respectively, significantly outperforming methods based on visual words that represent only local information in a purely statistical manner. We also compare the proposed method with three representative scene categorization methods; the results confirm its superiority.
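The abstract describes augmenting each local region's descriptor with context from a coarser scale and from its neighborhood before building the visual vocabulary. The following is a minimal sketch of that general idea, not the authors' implementation: the grid size, descriptor dimension, 8-neighborhood averaging, and the use of k-means for the unsupervised vocabulary are all illustrative assumptions.

```python
# Sketch of "contextual visual words": each fine-scale local descriptor is
# concatenated with its coarser-scale descriptor and the mean of its
# 8-neighborhood, then quantized into a word vocabulary (assumptions noted above).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Assume a 16x16 grid of local regions, each with a 128-d descriptor
# (SIFT-like), plus aligned descriptors computed at a coarser scale.
grid, d = 16, 128
fine = rng.standard_normal((grid, grid, d))    # fine-scale local descriptors
coarse = rng.standard_normal((grid, grid, d))  # coarser-scale descriptors

def contextual_descriptors(fine, coarse):
    """Concatenate each local descriptor with its coarser-scale descriptor
    and the mean descriptor of its 8-neighborhood (zero-padded at borders)."""
    g, _, d = fine.shape
    padded = np.pad(fine, ((1, 1), (1, 1), (0, 0)))
    out = np.empty((g, g, 3 * d))
    for i in range(g):
        for j in range(g):
            block = padded[i:i + 3, j:j + 3].reshape(9, d)
            neigh = (block.sum(0) - fine[i, j]) / 8.0  # mean of the 8 neighbours
            out[i, j] = np.concatenate([fine[i, j], coarse[i, j], neigh])
    return out.reshape(-1, 3 * d)

X = contextual_descriptors(fine, coarse)

# Quantize the contextual descriptors into a vocabulary of contextual visual
# words and represent the image as a normalized word histogram.
k = 50
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
hist, _ = np.histogram(kmeans.labels_, bins=np.arange(k + 1))
hist = hist / hist.sum()
print(hist.shape)  # (50,) -- image-level representation fed to a classifier
```

In practice the vocabulary would be learned over descriptors pooled from many training images and reused to encode each image; the per-image histograms would then be passed to a standard classifier for the 8/13/15-category experiments described above.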
Keywords: Scene categorization, Contextual visual words, Context based vision, Pattern recognition
Article history: Received 28 April 2009, Revised 26 September 2009, Accepted 11 November 2009, Available online 6 December 2009.
DOI: https://doi.org/10.1016/j.patcog.2009.11.009