Adversarial co-distillation learning for image recognition

Authors:

Highlights:

• In the process of knowledge co-distillation, we find that certain special images, e.g., divergent examples, can improve the generalization performance of deep neural networks, but such images are scarce.

• To assist knowledge co-distillation, we design an end-to-end framework named Adversarial Co-distillation Networks (ACNs) to generate extra divergent examples (see the sketch after this list).

• To improve the quality of the divergent examples, we develop Weakly Residual Connection and Restricted Adversarial Search.

• We conduct extensive experiments with various architectures and datasets.
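
Taken together, the highlights describe an adversarial loop: a generator synthesizes divergent examples, i.e., images on which two peer networks disagree, and the peers then co-distill on real data plus those examples. The PyTorch sketch below is a minimal illustration of that loop, not the authors' exact ACN formulation: the symmetric-KL disagreement measure, the toy network shapes, and the equal loss weighting are all assumptions, and the paper's Weakly Residual Connection and Restricted Adversarial Search refinements are omitted.

```python
# Minimal sketch of adversarial co-distillation, assuming:
#  - peer "divergence" is measured by symmetric KL (an assumption),
#  - toy MLP peers and a toy generator stand in for real architectures.
import torch
import torch.nn as nn
import torch.nn.functional as F


def disagreement(logits1, logits2):
    """Symmetric KL divergence between the two peers' predictions."""
    log_p1 = F.log_softmax(logits1, dim=1)
    log_p2 = F.log_softmax(logits2, dim=1)
    return (F.kl_div(log_p1, log_p2.exp(), reduction="batchmean")
            + F.kl_div(log_p2, log_p1.exp(), reduction="batchmean"))


# Two peer classifiers and a generator (all toy-sized for illustration).
peer1 = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
peer2 = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
gen = nn.Sequential(nn.Linear(64, 784), nn.Tanh())  # noise -> image

opt_peers = torch.optim.SGD(
    list(peer1.parameters()) + list(peer2.parameters()), lr=1e-2)
opt_gen = torch.optim.Adam(gen.parameters(), lr=1e-3)

# One real mini-batch (random stand-in for an image dataset).
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))

# Step 1: the generator seeks divergent examples by *maximizing*
# the peers' disagreement (hence the negated loss).
x_div = gen(torch.randn(32, 64)).view(32, 1, 28, 28)
g_loss = -disagreement(peer1(x_div), peer2(x_div))
opt_gen.zero_grad()
g_loss.backward()
opt_gen.step()

# Step 2: the peers co-distill: supervised loss on real data, plus a
# mutual-mimicry term that *minimizes* disagreement on divergent examples.
x_div = gen(torch.randn(32, 64)).view(32, 1, 28, 28).detach()
ce = F.cross_entropy(peer1(x), y) + F.cross_entropy(peer2(x), y)
p_loss = ce + disagreement(peer1(x_div), peer2(x_div))
opt_peers.zero_grad()
p_loss.backward()
opt_peers.step()
```

In a full implementation these two steps would alternate over many mini-batches, GAN-style; the sketch compresses them to a single pass of each for brevity.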

Keywords: Knowledge distillation, Data augmentation, Generative adversarial nets, Divergent examples, Image classification

Article history: Received 21 May 2020, Revised 27 August 2020, Accepted 9 September 2020, Available online 10 September 2020, Version of Record 21 September 2020.

Paper URL: https://doi.org/10.1016/j.patcog.2020.107659