Attentive matching network for few-shot learning

Authors:

Highlights:

Abstract

Few-shot learning has attracted increasing attention recently due to its broad applications. However, it remains challenging because of the difficulty of modeling with only a few samples. In this paper, we present an effective framework named Attentive Matching Network (AMN) to address the few-shot learning problem. Based on metric learning, AMN first learns robust representations via an elaborately designed embedding network using only a few samples. Distances between the representations of support samples and target samples are then computed with a similarity function to form a score vector, according to which classification is conducted. Different from existing algorithms, we propose a feature-level attention mechanism that helps the similarity function place more emphasis on the features that better reflect inter-class differences and helps the embedding network learn better feature extraction capability. Furthermore, to learn a discriminative embedding space that maximizes inter-class distance and minimizes intra-class distance, we introduce a novel Complementary Cosine Loss, which consists of two parts: a modified Cosine Distance Loss, which measures the distance between the predicted category similarities and the ground truth and directly uses all support samples to compute gradients, and a Hardest-category Discernment Loss, which handles the similarity of the hardest incorrect class. Results demonstrate that AMN achieves competitive performance on the Omniglot and miniImageNet datasets. In addition, we conduct extensive experiments to discuss the influence of the embedding network, the attention mechanism, and the loss function.
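The abstract outlines two components that lend themselves to a compact sketch: a feature-level attention that re-weights embedding dimensions before similarities are computed, and the two-part Complementary Cosine Loss. The following is a minimal, hypothetical PyTorch sketch of one possible reading; the function names, the non-negativity constraint on the attention vector, the one-hot form of the target similarity vector, and the weighting factor `lam` are illustrative assumptions and are not specified in the abstract.

```python
import torch
import torch.nn.functional as F

def attentive_cosine_scores(support, targets, attention):
    """Cosine similarity between target and per-class support embeddings,
    re-weighted per feature dimension by a learned attention vector.

    support:   (C, D) one embedding per support class
    targets:   (B, D) target/query embeddings
    attention: (D,)   feature-level attention weights (assumed non-negative)
    returns:   (B, C) similarity scores in [-1, 1]
    """
    a = attention.clamp(min=0)               # emphasise discriminative dimensions
    s = F.normalize(support * a, dim=-1)
    t = F.normalize(targets * a, dim=-1)
    return t @ s.t()                          # (B, C) cosine similarities


def complementary_cosine_loss(scores, labels, lam=1.0):
    """Hypothetical two-part loss following the abstract's description:
    1) Cosine Distance Loss: cosine distance between the predicted score
       vector and a one-hot ground-truth vector, so every support class
       contributes to the gradient.
    2) Hardest-category Discernment Loss: penalise the similarity assigned
       to the hardest (highest-scoring) incorrect class.
    """
    one_hot = F.one_hot(labels, scores.size(1)).float()
    # (1) distance between predicted and true category-similarity vectors
    cos_dist = 1.0 - F.cosine_similarity(scores, one_hot, dim=1)
    # (2) similarity of the hardest incorrect class
    wrong = scores.masked_fill(one_hot.bool(), float("-inf"))
    hardest = wrong.max(dim=1).values
    return (cos_dist + lam * F.relu(hardest)).mean()
```

In this reading, both terms operate on the same attention-weighted score vector, so gradients flow through the attention weights and the embedding network alike; the actual margins, normalization, and weighting used in AMN are defined in the paper itself.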

Keywords:

Article history: Received 3 November 2018, Revised 11 May 2019, Accepted 26 July 2019, Available online 5 August 2019, Version of Record 4 September 2019.

DOI: https://doi.org/10.1016/j.cviu.2019.07.001