Training Visual-Semantic Embedding Network for Boosting Automatic Image Annotation

Authors: Weifeng Zhang, Hua Hu, Haiyang Hu

Abstract

Image auto-annotation, which annotates images according to their semantic content, has become a research focus in computer vision, as it helps people edit, retrieve, and understand large image collections. Over the past decades, researchers have proposed many approaches to this task and achieved remarkable performance on several standard image datasets. In this paper, we train neural networks with a visual and semantic ranking loss to learn a visual-semantic embedding. This embedding can be easily applied to nearest-neighbor based models to boost their performance on image auto-annotation. We test our method on four challenging image datasets and report comparisons with existing works. Experimental results show that our method can be applied to several state-of-the-art nearest-neighbor based models, including TagProp and 2PKNN, and significantly improves their performance.
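The abstract does not give the exact form of the visual and semantic ranking loss; a common choice for learning such a visual-semantic embedding is a margin-based bidirectional ranking objective over matched image-tag pairs (as in DeViSE/VSE-style models). The PyTorch sketch below illustrates that idea under this assumption; the feature dimensions, margin value, and module names are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualSemanticEmbedding(nn.Module):
    """Projects image features and tag (word) vectors into a shared space.
    Dimensions and layers are illustrative, not the paper's exact setup."""
    def __init__(self, img_dim=4096, word_dim=300, embed_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.word_proj = nn.Linear(word_dim, embed_dim)

    def forward(self, img_feats, word_vecs):
        v = F.normalize(self.img_proj(img_feats), dim=-1)   # visual side
        s = F.normalize(self.word_proj(word_vecs), dim=-1)  # semantic side
        return v, s

def ranking_loss(v, s, margin=0.2):
    """Bidirectional hinge ranking loss over a batch of matched (image, tag) pairs.
    Row i of `v` matches row i of `s`; all other rows serve as negatives."""
    scores = v @ s.t()                     # cosine similarities (inputs are normalized)
    pos = scores.diag().view(-1, 1)
    # image -> tag: the matching tag should beat every other tag by the margin
    cost_i2t = (margin + scores - pos).clamp(min=0)
    # tag -> image: the matching image should beat every other image by the margin
    cost_t2i = (margin + scores - pos.t()).clamp(min=0)
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_i2t.masked_fill(mask, 0).sum() + cost_t2i.masked_fill(mask, 0).sum()

# Usage sketch: embed a batch, compute the loss, and back-propagate.
model = VisualSemanticEmbedding()
img_feats = torch.randn(32, 4096)   # e.g. CNN image features (placeholder data)
word_vecs = torch.randn(32, 300)    # e.g. word2vec tag vectors (placeholder data)
v, s = model(img_feats, word_vecs)
loss = ranking_loss(v, s)
loss.backward()
```

Once trained, distances between the learned image embeddings (rather than raw visual features) could be fed to nearest-neighbor annotation models such as TagProp or 2PKNN, which is the kind of plug-in use the abstract describes.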

Keywords: Image auto-annotation, Visual-semantic embedding, Neural networks

Review process:

Paper URL: https://doi.org/10.1007/s11063-017-9753-9