Contrastive attention network with dense field estimation for face completion

Authors:

Highlights:

• We propose a Siamese inference network based on contrastive learning for face completion, which improves the robustness and accuracy of representation learning under complex mask patterns.

• We propose a novel dual attention fusion module that explores feature interdependencies in the spatial and channel dimensions and naturally blends features from the missing and known regions, synthesizing smooth content with rich texture.

• To keep the structural information of the input intact, our network estimates a dense correspondence field that binds the 2D image and 3D surface spaces, preserving the expression and pose of the input.

• Compared with state-of-the-art methods on three standard datasets, our method produces smooth inpainting results with rich texture and plausible topological structure, and also substantially improves face verification performance.
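The paper releases no code, but the contrastive objective behind the Siamese inference network in the first highlight is typically an InfoNCE-style loss: two differently masked views of the same face are encoded, matching rows are treated as positives and all other rows in the batch as negatives. A minimal NumPy sketch under that assumption (function names are ours, not the authors'):

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # project embeddings onto the unit hypersphere
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def info_nce_loss(z1, z2, temperature=0.1):
    """Symmetric InfoNCE over a batch of Siamese embedding pairs.

    z1, z2: (N, D) embeddings of two differently masked views of the
    same N faces; matching rows are positives, all others negatives.
    """
    z1, z2 = l2_normalize(z1), l2_normalize(z2)
    logits = z1 @ z2.T / temperature          # (N, N) cosine similarities
    idx = np.arange(len(z1))                  # positives lie on the diagonal

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the two directions (view1 -> view2 and view2 -> view1)
    return 0.5 * (xent(logits) + xent(logits.T))
```

The temperature and batch-as-negatives scheme here follow common contrastive-learning practice; the paper's exact loss and sampling strategy may differ.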
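The dual attention fusion module in the second highlight can be illustrated with a toy NumPy version: a channel gate from global pooling, a spatial gate from pooled descriptors, and a mask-guided blend of generated and known-region features. This is a simplified sketch of the general spatial/channel-attention idea, not the authors' exact module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention_fuse(feat, known, mask):
    """Blend generated features with encoder features from known regions.

    feat, known: (C, H, W) feature maps; mask: (1, H, W), 1 = missing region.
    """
    # channel attention: global average pooling -> per-channel gate
    ch_gate = sigmoid(feat.mean(axis=(1, 2)))               # (C,)
    feat = feat * ch_gate[:, None, None]
    # spatial attention: channel-pooled descriptor -> per-pixel gate
    sp_desc = 0.5 * (feat.mean(axis=0) + feat.max(axis=0))  # (H, W)
    feat = feat * sigmoid(sp_desc)[None]
    # fusion: attended features fill the hole, known regions pass through
    return mask * feat + (1.0 - mask) * known
```

In a real network the gates would be produced by learned convolutions; the point here is only the two attention axes and the mask-weighted blend.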
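The dense correspondence field in the third highlight maps image pixels to a 3D surface parameterization; once such a field is estimated, it is used by resampling appearance at the corresponding locations. A self-contained bilinear-sampling sketch of that resampling step (the field format and names are our assumptions):

```python
import numpy as np

def sample_field(image, field):
    """Bilinearly sample `image` (H, W, C) at the locations given by a
    dense correspondence `field` (H, W, 2) of (x, y) pixel coordinates."""
    x, y = field[..., 0], field[..., 1]
    # clamp the top-left corner so the 2x2 neighborhood stays in bounds
    x0 = np.clip(np.floor(x).astype(int), 0, image.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, image.shape[0] - 2)
    wx = (x - x0)[..., None]
    wy = (y - y0)[..., None]
    # weighted sum of the four surrounding pixels
    return (image[y0, x0] * (1 - wx) * (1 - wy)
            + image[y0, x0 + 1] * wx * (1 - wy)
            + image[y0 + 1, x0] * (1 - wx) * wy
            + image[y0 + 1, x0 + 1] * wx * wy)
```

An identity field (each pixel mapped to its own coordinates) reproduces the input exactly, which is a convenient sanity check for any learned correspondence.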

Keywords: Face completion, Unsupervised learning, Attention mechanism, 3D face analysis

Article history: Received 23 January 2021, Revised 20 November 2021, Accepted 26 November 2021, Available online 27 November 2021, Version of Record 5 December 2021.

DOI: https://doi.org/10.1016/j.patcog.2021.108465