Adaptive Deep Disturbance-Disentangled Learning for Facial Expression Recognition
Authors: Delian Ruan, Rongyun Mo, Yan Yan, Si Chen, Jing-Hao Xue, Hanzi Wang
Abstract
In this paper, we propose a novel adaptive deep disturbance-disentangled learning (ADDL) method for effective facial expression recognition (FER). ADDL involves a two-stage learning procedure. First, a disturbance feature extraction model is trained to identify multiple disturbing factors on a large-scale face database involving disturbance label information. Second, an adaptive disturbance-disentangled model, which contains a global shared subnetwork and two task-specific subnetworks, is designed and learned to explicitly disentangle disturbing factors from facial expression images. In particular, the expression subnetwork leverages a multi-level attention mechanism to extract expression-specific features, while the disturbance subnetwork embraces a new adaptive disturbance feature learning module to extract disturbance-specific features based on adversarial transfer learning. Moreover, a mutual information neural estimator is adopted to minimize the correlation between expression-specific and disturbance-specific features. Extensive experimental results on both in-the-lab FER databases (including CK+, MMI, and Oulu-CASIA) and in-the-wild FER databases (including RAF-DB, SFEW, Aff-Wild2, and AffectNet) show that our proposed method consistently outperforms several state-of-the-art FER methods. This clearly demonstrates the great potential of disturbance disentanglement for FER. Our code is available at https://github.com/delian11/ADDL.
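The abstract mentions minimizing the correlation between expression-specific and disturbance-specific features with a mutual information neural estimator (MINE). As an illustrative sketch only (not the authors' implementation), the Donsker-Varadhan lower bound that MINE optimizes can be computed as follows; the toy statistics network, feature dimensions, and sample data here are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def dv_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound on mutual information:
    E_P[T(x,y)] - log E_{P x Q}[exp(T(x,y))]."""
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())

# Toy statistics network T: a fixed random two-layer MLP over
# concatenated (expression, disturbance) feature pairs.
# In MINE, T would be trained to maximize the bound; disentangling
# then minimizes the resulting MI estimate w.r.t. the encoders.
W1 = rng.normal(size=(8, 16)) * 0.5
W2 = rng.normal(size=(16, 1)) * 0.5

def T(x, y):
    h = np.tanh(np.concatenate([x, y], axis=1) @ W1)
    return (h @ W2).squeeze(-1)

# Hypothetical correlated "expression" and "disturbance" features.
n = 512
x = rng.normal(size=(n, 4))
y = x + 0.1 * rng.normal(size=(n, 4))   # samples from the joint
y_shuffled = y[rng.permutation(n)]      # samples from the product of marginals

mi_estimate = dv_bound(T(x, y), T(x, y_shuffled))
```

With an untrained `T` the bound is loose (near zero); in MINE the network `T` is trained to tighten it, and a disentangling model would then be penalized by the tightened estimate.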
Keywords: Facial expression recognition, Multi-task learning, Adversarial transfer learning, Multi-level attention
DOI: https://doi.org/10.1007/s11263-021-01556-7