Enhancing semantic image retrieval with limited labeled examples via deep learning

Abstract:

With the rapid growth of the Internet, a large number of multi-modal objects, such as images and their social tags, can easily be downloaded from the Web. Such objects can improve the training process when only a few labeled images are available. To leverage these unlabeled and labeled multi-modal Web objects for enhancing the performance of unimodal image retrieval, we propose a novel approach in this paper, called Semi-supervised Multi-concept Retrieval for semantic image retrieval via Deep Learning (SMRDL). Unlike conventional methods that treat the concepts in a semantic multi-concept query as multiple independent concepts, our approach regards them as a single holistic scene for multi-concept scene learning in unimodal retrieval. In particular, we first train a multi-modal Convolutional Neural Network (CNN) as a concept classifier for images and texts, and then use it to annotate unlabeled Web images. For each unlabeled image, we obtain its most relevant concept annotations using a new annotation-promotion strategy. Finally, we employ a unimodal visual CNN to train a concept classifier in the visual modality, which uses both unlabeled and labeled examples for concept learning in unimodal retrieval. The results of our comprehensive experiments on two datasets, MIR Flickr 2011 and NUS-WIDE, show that our proposed approach outperforms several state-of-the-art methods.
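The pipeline described in the abstract (classify unlabeled Web images, keep only the most confident concept annotations, then feed the pseudo-labeled examples into training) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `promote_annotations`, the `top_k` parameter, and the example concept scores are all hypothetical stand-ins for the paper's annotation-promotion strategy.

```python
def promote_annotations(concept_scores, top_k=2):
    """Simplified 'annotation promotion': keep only the top_k most
    confident concept annotations for one unlabeled Web image.
    concept_scores maps concept name -> classifier confidence."""
    ranked = sorted(concept_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [concept for concept, _ in ranked[:top_k]]


# Hypothetical confidences produced by a multi-modal concept classifier
# for one unlabeled image and its social tags.
scores = {"beach": 0.91, "sunset": 0.78, "dog": 0.12, "car": 0.05}

# The promoted annotations become pseudo-labels; together with the
# human-labeled examples, they would train the visual-modality classifier.
pseudo_labels = promote_annotations(scores, top_k=2)
print(pseudo_labels)  # -> ['beach', 'sunset']
```

The key design point the sketch illustrates is that low-confidence annotations are discarded before semi-supervised training, so noisy Web pseudo-labels do not dominate the small set of clean labeled examples.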

Keywords: Semantic image retrieval, Semi-supervised learning, Convolutional neural networks, Concept-based image retrieval

Article history: Received 2 May 2018, Revised 20 July 2018, Accepted 25 August 2018, Available online 28 August 2018, Version of Record 21 November 2018.

DOI: https://doi.org/10.1016/j.knosys.2018.08.032