Co-Learning for Few-Shot Learning
Authors: Rui Xu, Lei Xing, Shuai Shao, Baodi Liu, Kai Zhang, Weifeng Liu
Abstract
Few-shot learning (FSL), which aims to address the problem of data scarcity, is a hot topic in current research. The most commonly used FSL framework consists of two stages: (1) Pre-train: employ the base data to train a feature extraction model (FEM). (2) Meta-test: use the trained FEM to extract the feature embeddings of the novel data and then recognize them with a to-be-designed classifier. Owing to the constraint of limited labeled samples, some researchers attempt to exploit unlabeled samples to strengthen the classifier by introducing a self-training strategy. However, a single classifier (trained on scarce labeled samples) usually misclassifies unlabeled samples because of its insufficient discriminative ability, which we dub the Single-Classifier-Misclassify-Data (SCMD) problem. To address this fundamental problem, we design a Co-learning (CL) method for FSL. Specifically, we find that different classifiers have different adaptability to the same feature distribution. Hence we exploit two basic classifiers to separately infer pseudo-labels for the unlabeled samples and cross-expand them into the labeled data. The two complementary classifiers make the predicted pseudo-labels more reliable. We evaluate our CL on five benchmark datasets (mini-ImageNet, tiered-ImageNet, CIFAR-FS, FC100, and CUB), where it exceeds other state-of-the-art methods by 0.63–4.6%. This outstanding performance demonstrates the effectiveness of our method.
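The abstract's core idea, two classifiers that each pseudo-label the unlabeled samples and cross-expand their confident predictions into the labeled set, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the feature embeddings have already been produced by a pre-trained FEM, it picks logistic regression and nearest-centroid as the two base classifiers, and the function name `co_learning` and the hyper-parameters `n_rounds` and `k` are illustrative only.

```python
# A minimal co-learning sketch, assuming features were extracted by a pre-trained FEM.
# The classifier choices and hyper-parameters below are assumptions, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestCentroid


def co_learning(sup_x, sup_y, unl_x, n_rounds=3, k=5):
    """Cross-expand confident pseudo-labels from two classifiers into the support set."""
    # Each classifier keeps its own growing support set.
    x_a, y_a = sup_x.copy(), sup_y.copy()
    x_b, y_b = sup_x.copy(), sup_y.copy()
    remaining = np.arange(len(unl_x))

    for _ in range(n_rounds):
        if len(remaining) == 0:
            break
        clf_a = LogisticRegression(max_iter=1000).fit(x_a, y_a)
        clf_b = NearestCentroid().fit(x_b, y_b)

        # Classifier A: pick its k most confident unlabeled samples by predicted probability.
        conf_a = clf_a.predict_proba(unl_x[remaining]).max(axis=1)
        top_a = remaining[np.argsort(-conf_a)[:k]]

        # Classifier B: nearest-centroid has no probabilities, so use the negative
        # distance to the closest class centroid as a confidence score.
        dists = np.stack([np.linalg.norm(unl_x[remaining] - c, axis=1)
                          for c in clf_b.centroids_], axis=1)
        conf_b = -dists.min(axis=1)
        top_b = remaining[np.argsort(-conf_b)[:k]]

        # Cross expansion: A's confident pseudo-labels enlarge B's set, and vice versa.
        x_b = np.vstack([x_b, unl_x[top_a]])
        y_b = np.concatenate([y_b, clf_a.predict(unl_x[top_a])])
        x_a = np.vstack([x_a, unl_x[top_b]])
        y_a = np.concatenate([y_a, clf_b.predict(unl_x[top_b])])

        remaining = np.setdiff1d(remaining, np.union1d(top_a, top_b))

    # Final classifiers trained on the expanded support sets.
    clf_a = LogisticRegression(max_iter=1000).fit(x_a, y_a)
    clf_b = NearestCentroid().fit(x_b, y_b)
    return clf_a, clf_b
```

The cross exchange is the point of the design: because the two classifiers adapt differently to the same feature distribution, one classifier's confident mistakes are less likely to be repeated by the other, which is how the method counters the SCMD problem described above.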
Keywords: Few-shot learning (FSL), feature extraction model (FEM), Single-Classifier-Misclassify-Data (SCMD) problem, Co-learning (CL)
Paper link: https://doi.org/10.1007/s11063-022-10770-4