meshSIFT: Local surface features for 3D face recognition under expression variations and partial data
Authors:
Highlights:
Abstract
Matching 3D faces for recognition is a challenging task due to expression variations, missing data, and outliers. In this paper, the meshSIFT algorithm and its use for 3D face recognition are presented. The algorithm consists of four major components. First, salient points on the 3D facial surface are detected as mean curvature extrema in scale space. Second, orientations are assigned to each of these salient points. Third, the neighbourhood of each salient point is described in a feature vector consisting of concatenated histograms of shape indices and slant angles. Fourth, the feature vectors of two 3D facial surfaces are reliably matched by comparing the angles in feature space. The result is an algorithm that is robust to expression variations, missing data, and outliers.

As a first contribution, we demonstrate that the number of matching meshSIFT features is a reliable measure for expression-invariant face recognition, as shown by rank-1 recognition rates of 93.7% and 89.6% on the Bosphorus and FRGC v2 databases, respectively. Next, we demonstrate that symmetrising the feature descriptors allows comparing two 3D facial surfaces with limited or no overlap. Validation on the data of the “SHREC’11: Face Scans” contest, which contains many partial scans, resulted in a recognition rate of 98.6%, clearly outperforming all other participants in the challenge. Finally, we demonstrate the use of meshSIFT for two other problems related to 3D face recognition: pose normalisation and symmetry plane estimation. For both problems, applying meshSIFT in combination with RANSAC yielded a correct solution for approximately 90% of all Bosphorus database meshes (excluding the ±90° and ±45° rotations).
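The matching step can be made concrete with a short sketch. Assuming each surface's descriptors are stored as rows of a NumPy array, a candidate match is accepted when the angle between a descriptor and its nearest neighbour in feature space is sufficiently smaller than the angle to the second-nearest neighbour (an angle-based ratio test in the spirit of SIFT). The function names, the array layout, and the 0.8 threshold below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def angle(u, v):
    """Angle (in radians) between two descriptor vectors in feature space."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match descriptors of surface A against surface B.

    A descriptor in A is matched to its nearest descriptor in B (by angle)
    only if that angle is clearly smaller than the angle to the second-nearest
    descriptor (ratio test). The 0.8 threshold is an illustrative choice.
    Returns a list of (index_in_A, index_in_B) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        angles = np.array([angle(d, e) for e in desc_b])
        order = np.argsort(angles)
        best, second = order[0], order[1]
        if angles[best] < ratio * angles[second]:
            matches.append((i, best))
    return matches
```

Under this sketch, the length of the returned match list plays the role of the similarity score described in the abstract: the more matching meshSIFT features two scans share, the more likely they belong to the same identity.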
Keywords:
Article history: Received 22 September 2011, Accepted 5 October 2012, Available online 1 November 2012.
Paper URL: https://doi.org/10.1016/j.cviu.2012.10.002