Data-free knowledge distillation in neural networks for regression
Highlights:
• We present a data-free knowledge distillation method for regression.
• It adopts a generator that creates synthetic data to transfer knowledge to the student.
• Given a teacher, the generator and student are trained in an adversarial manner.
• The generator is trained to synthesize data on which the student fails to mimic the teacher.
• The student is trained on the synthetic data to mimic the teacher's predictions.
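The adversarial loop in the highlights alternates two updates: the generator seeks inputs where student and teacher disagree most, and the student then fits the teacher on those inputs. The sketch below illustrates this alternation on a deliberately simplified setting; the fixed linear teacher, the polynomial-free linear student, and the Gaussian "generator" whose mean drifts toward high-discrepancy samples are all assumptions for illustration, not the paper's neural-network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed teacher (assumption): a known linear regression function.
def teacher(x):
    return 2.0 * x + 1.0

# Student: linear model s(x) = w[0] + w[1] * x, initialized at zero.
def features(x):
    return np.stack([np.ones_like(x), x], axis=1)

w = np.zeros(2)

# Stand-in "generator": a Gaussian sampling distribution whose mean is
# nudged toward the candidate input where the student-teacher gap is
# largest (a crude proxy for training a generator network adversarially).
mu, sigma = 0.0, 1.0

for step in range(3000):
    # Generator step: sample candidates, move mu toward the worst-case point.
    cand = rng.normal(mu, sigma, size=64)
    gap = (features(cand) @ w - teacher(cand)) ** 2
    mu = np.clip(mu + 0.05 * (cand[np.argmax(gap)] - mu), -2.0, 2.0)

    # Student step: gradient descent on squared error against the teacher,
    # using only synthetic samples drawn from the generator distribution.
    x = rng.normal(mu, sigma, size=64)
    phi = features(x)
    err = phi @ w - teacher(x)
    w -= 0.05 * phi.T @ err / len(x)

# After training, the student should closely track the teacher on held-out
# inputs, even though no real training data was ever used.
test_x = np.linspace(-1.0, 1.0, 50)
mse = np.mean((features(test_x) @ w - teacher(test_x)) ** 2)
```

Because the target here is realizable by the student, the loop converges to the teacher's coefficients; in the paper's setting both generator and student are neural networks trained by backpropagation, so this only conveys the structure of the alternating objective.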
Keywords: Neural network, Knowledge distillation, Data-free knowledge distillation, Zero-shot knowledge distillation, Regression
Article history: Received 12 December 2020, Revised 15 February 2021, Accepted 27 February 2021, Available online 6 March 2021, Version of Record 18 March 2021.
DOI: https://doi.org/10.1016/j.eswa.2021.114813