Lie group continual meta learning algorithm

Authors: Mengjuan Jiang, Fanzhang Li

Abstract

Humans can use acquired experience to learn new skills quickly without forgetting the knowledge they already have. Neural networks, however, cannot perform continual learning the way humans do: they easily fall into the stability-plasticity dilemma, which leads to catastrophic forgetting. Since meta-learning, which treats previously acquired knowledge as a prior, can directly optimize the final objective, this paper proposes the Lie Group Continual Meta Learning Algorithm (LGCMLA), an improvement of the Continual Meta Learning Algorithm (CMLA) proposed by Jiang et al. On the one hand, LGCMLA strengthens the continuity between tasks by changing the inner-loop update rule: instead of starting each task from randomly initialized parameters, each subsequent task starts from the updated parameters of the previous task. On the other hand, it constrains the parameter space with orthogonal groups and adopts natural Riemannian gradient descent to accelerate convergence. LGCMLA not only corrects the poor convergence and stability of CMLA, but also further improves the generalization performance of the model and resolves the stability-plasticity dilemma more effectively. Experiments on the miniImageNet, tieredImageNet and Fewshot-CIFAR100 (Canadian Institute For Advanced Research) datasets demonstrate the effectiveness of LGCMLA. In particular, compared to MAML (Model-Agnostic Meta-Learning) with a standard four-layer convolutional backbone, the 1-shot and 5-shot accuracies improve by 16.4% and 17.99% respectively under the 5-way setting on miniImageNet.
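To make the two mechanisms in the abstract concrete, below is a minimal sketch (not the paper's actual update rule) of an inner loop in which each task starts from the parameters left by the previous task, and each step takes a Riemannian gradient descent step on the orthogonal group: the Euclidean gradient is projected onto the tangent space at the current point and the result is retracted back onto the group via a QR decomposition. The helper names (inner_loop, riemannian_grad, retract_qr), the step size, and the toy quadratic tasks are all hypothetical illustrations, not details from the paper.

import numpy as np

def skew(a):
    """Skew-symmetric part of a square matrix."""
    return 0.5 * (a - a.T)

def riemannian_grad(w, euclid_grad):
    """Project a Euclidean gradient onto the tangent space of the
    orthogonal group at w (canonical metric); a common choice, assumed here."""
    return w @ skew(w.T @ euclid_grad)

def retract_qr(w):
    """Map a perturbed matrix back onto the orthogonal group via QR."""
    q, r = np.linalg.qr(w)
    # Fix column signs so the retraction is unique (diagonal of r positive).
    return q * np.sign(np.diag(r))

def inner_loop(w, tasks, lr=0.05, steps=3):
    """Hypothetical inner loop: each task starts from the parameters
    produced by the previous task instead of a fresh initialization."""
    for task_grad in tasks:               # task_grad: w -> Euclidean gradient
        for _ in range(steps):
            g = riemannian_grad(w, task_grad(w))
            w = retract_qr(w - lr * g)    # descend, then retract onto O(n)
    return w

# Toy usage: drive the orthogonal parameter toward task-specific targets.
rng = np.random.default_rng(0)
targets = [retract_qr(rng.standard_normal((4, 4))) for _ in range(3)]
tasks = [lambda w, t=t: 2.0 * (w - t) for t in targets]  # grad of ||w - t||^2
w_final = inner_loop(np.eye(4), tasks)
print(np.allclose(w_final.T @ w_final, np.eye(4), atol=1e-8))  # stays orthogonal

The point of the sketch is only the control flow and the constraint: the parameter carried across tasks never leaves the orthogonal group, which is what the group restriction in LGCMLA is meant to guarantee.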

Keywords: Lie group continual meta learning algorithm, Catastrophic forgetting, Meta-learning, Orthogonal group, Natural Riemannian gradient


Paper link: https://doi.org/10.1007/s10489-021-03036-4