Harnessing noisy Web images for deep representation
Authors:
Highlights:
Abstract
The ever-growing volume of Web images is likely the next important data source for scaling up deep neural networks, which have recently surpassed humans on image classification tasks. The fact that deep networks are hungry for labelled data prevents them from extracting the valuable information contained in Web images, which are abundant and cheap. There have been efforts to train neural networks such as autoencoders in unsupervised or semi-supervised settings. Nonetheless, they perform worse than supervised methods, partly because the loss functions used in unsupervised methods, for instance the Euclidean loss, fail to guide the network to learn discriminative features and to ignore unnecessary details. We instead train convolutional networks in a supervised setting, but use weakly labelled data: large amounts of unannotated Web images downloaded from Flickr and Bing. Our experiments are conducted at several data scales, with different choices of network architecture, and with different data preprocessing techniques. The effectiveness of our approach is demonstrated by the good generalization of the learned representations on six public datasets.
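To make the training setup described in the abstract concrete, below is a minimal sketch (not the authors' implementation) of supervised training on weakly labelled Web images: the search keyword used to download each image is taken as its noisy class label, the images are assumed to sit in a hypothetical web_images/<keyword>/ directory layout, and an off-the-shelf ResNet-18 stands in for the architectures compared in the paper.

    # Sketch: treat the Flickr/Bing query keyword of each downloaded image
    # as a weak class label and train a CNN with an ordinary supervised loss.
    import torch
    import torch.nn as nn
    import torchvision.models as models
    from torchvision import transforms
    from torchvision.datasets import ImageFolder
    from torch.utils.data import DataLoader

    # Assumed layout: web_images/<query_keyword>/*.jpg, where the keyword
    # (the search term used to fetch the image) serves as the noisy label.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.RandomCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])
    dataset = ImageFolder("web_images", transform=preprocess)
    loader = DataLoader(dataset, batch_size=256, shuffle=True, num_workers=8)

    # Any standard convolutional architecture could be plugged in here;
    # ResNet-18 is chosen purely for brevity.
    model = models.resnet18(num_classes=len(dataset.classes))

    # Discriminative cross-entropy loss on the noisy keyword labels, in
    # contrast to the Euclidean reconstruction loss used by autoencoders.
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    for images, weak_labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), weak_labels)
        loss.backward()
        optimizer.step()

The learned convolutional features can afterwards be evaluated, as in the paper, by how well they transfer to independent public classification datasets.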
Keywords:
Review history: Received 7 June 2016, Revised 4 January 2017, Accepted 27 January 2017, Available online 29 January 2017, Version of Record 17 December 2017.
Paper URL: https://doi.org/10.1016/j.cviu.2017.01.009