Reshaping 3D facial scans for facial appearance modeling and 3D facial expression analysis

Authors:

Highlights:

Abstract

3D face scans are widely used for face modeling and analysis. Because face scans yield variable point clouds across frames, they may capture incomplete facial data and lack point-to-point correspondences across scans, which makes such data difficult to use directly for analysis. This paper presents an efficient approach to representing facial shapes from face scans by reconstructing face models from regional information and a generic model. A new approach to 3D feature detection and a hybrid approach, combining two vertex-mapping algorithms (displacement mapping and point-to-surface mapping) with a regional blending algorithm, are proposed to reconstruct facial surface detail. The resulting models represent individual facial shapes consistently and adaptively, establishing facial point correspondences across individual models. The accuracy of the generated models is evaluated quantitatively, and their applicability is validated on 3D facial expression recognition using the static 3DFE and dynamic 4DFE databases. A comparison with the state of the art is also reported.
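The abstract names two vertex-mapping ideas, displacement mapping (moving a generic-model vertex along its normal toward the scan) and point-to-surface mapping (projecting a vertex onto the scan surface). The toy sketch below illustrates both ideas under simplifying assumptions (nearest-point search and a single planar triangle as the "surface"); the function names and data are illustrative, not the authors' implementation.

```python
import numpy as np

def displacement_map(vertex, normal, scan_points):
    """Illustrative displacement mapping: shift a generic-model vertex
    along its unit normal by the normal component of the offset to the
    nearest scan point (assumption: nearest-neighbor correspondence)."""
    offsets = scan_points - vertex
    nearest = scan_points[np.argmin(np.linalg.norm(offsets, axis=1))]
    d = np.dot(nearest - vertex, normal)  # signed distance along the normal
    return vertex + d * normal

def point_to_surface_map(vertex, triangle):
    """Illustrative point-to-surface mapping: orthogonally project the
    vertex onto the plane of one scan triangle (a stand-in for a full
    point-to-surface projection over a meshed scan)."""
    a, b, c = triangle
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    return vertex - np.dot(vertex - a, n) * n

# Toy usage: a vertex at the origin with normal +z, and a flat scan
# patch lying at z = 0.2; both mappings lift the vertex onto the patch.
v = np.zeros(3)
nrm = np.array([0.0, 0.0, 1.0])
scan = np.array([[0.0, 0.0, 0.2], [1.0, 0.0, 0.2], [0.0, 1.0, 0.2]])
print(displacement_map(v, nrm, scan))       # [0. 0. 0.2]
print(point_to_surface_map(v, scan))        # [0. 0. 0.2]
```

In the paper's pipeline these per-vertex mappings are followed by a regional blending step that merges the regionally reconstructed surfaces; that step is not sketched here.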

Keywords: 3D facial expression recognition, 3D face modeling, 3D face scans, 3D model mapping

Article history: Received 14 July 2011, Revised 30 November 2011, Accepted 30 December 2011, Available online 11 January 2012.

Paper URL: https://doi.org/10.1016/j.imavis.2011.12.008