Feature hallucination via Maximum A Posteriori for few-shot learning
Authors:
Abstract
Few-shot learning aims to train an effective classifier in a small-data regime. Because of the scarcity of training samples (often only 1 or 5 per class), traditional deep learning solutions tend to overfit. An intuitive way to address this issue is to augment or hallucinate sufficient training data. To this end, we propose a simple yet effective method to build a model for novel categories from only a few samples. Specifically, we assume that each category in the base set follows a Gaussian distribution, so that Maximum A Posteriori (MAP) estimation can be used to infer the distribution of a novel category from even a single example. To achieve this, we first transform the features of each base category toward Gaussian form with a power transformation to support MAP estimation. We then estimate the Gaussian mean of the novel category under the Gaussian prior, given the few available samples. Finally, each novel category is represented by a unique Gaussian distribution, from which sufficient training features can be sampled to obtain a highly accurate classifier for final predictions. Experimental results on four few-shot benchmarks show that our method significantly outperforms the baseline methods on both 1-shot and 5-shot tasks. Extensive results on cross-domain tasks and visualizations of the estimated feature distributions further demonstrate its effectiveness.
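The sketch below illustrates the general idea described in the abstract, not the authors' exact pipeline: features are Gaussianized with a power transform, each novel class mean is estimated by a closed-form MAP rule under a Gaussian prior built from base-class statistics, extra features are sampled from the resulting Gaussian, and a simple classifier is fit on real plus hallucinated features. All names (power_transform, map_estimate_mean, hallucinate_and_classify), the isotropic-variance prior, and the logistic-regression classifier are illustrative assumptions.

```python
# Hypothetical sketch of MAP-based feature hallucination for few-shot learning.
# Assumes nonnegative (e.g., post-ReLU) features and an isotropic Gaussian likelihood.
import numpy as np
from sklearn.linear_model import LogisticRegression

def power_transform(x, beta=0.5, eps=1e-6):
    """Power transform to make feature distributions more Gaussian-like."""
    return np.power(x + eps, beta)

def map_estimate_mean(support, prior_mean, prior_var, likelihood_var):
    """MAP estimate of a class mean under a Gaussian prior N(prior_mean, prior_var)
    and a Gaussian likelihood with per-dimension variance likelihood_var."""
    n = support.shape[0]
    sample_mean = support.mean(axis=0)
    # Closed-form posterior mean for Gaussian prior + Gaussian likelihood.
    w = (n * prior_var) / (n * prior_var + likelihood_var)
    return w * sample_mean + (1.0 - w) * prior_mean

def hallucinate_and_classify(support_x, support_y, query_x,
                             base_means, base_var, n_sampled=100):
    """Estimate each novel-class distribution via MAP, sample extra features,
    and fit a classifier on the real + hallucinated features."""
    support_x = power_transform(support_x)
    query_x = power_transform(query_x)
    prior_mean = base_means.mean(axis=0)          # prior mean from base classes (assumption)
    prior_var = base_means.var(axis=0).mean()     # scalar prior variance (assumption)

    aug_x, aug_y = [support_x], [support_y]
    for c in np.unique(support_y):
        mu = map_estimate_mean(support_x[support_y == c],
                               prior_mean, prior_var, base_var)
        sampled = np.random.normal(mu, np.sqrt(base_var),
                                   size=(n_sampled, mu.shape[0]))
        aug_x.append(sampled)
        aug_y.append(np.full(n_sampled, c))

    clf = LogisticRegression(max_iter=1000)
    clf.fit(np.concatenate(aug_x), np.concatenate(aug_y))
    return clf.predict(query_x)
```

In this reading, the weight w interpolates between the prior mean and the support-set sample mean, so with only one shot the estimate leans more on the base-class prior, while with more shots it trusts the observed samples; the sampled features then give the classifier enough data to avoid overfitting.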
Keywords: Few-shot learning, Maximum A Posteriori, Feature hallucination, Gaussian prior
Article history: Received 27 February 2021, Revised 30 April 2021, Accepted 5 May 2021, Available online 8 May 2021, Version of Record 12 May 2021.
DOI: https://doi.org/10.1016/j.knosys.2021.107129