Robust face-voice based speaker identity verification using multilevel fusion

Authors:

Highlights:

Abstract

In this paper, we propose a robust multilevel fusion strategy involving cascaded multimodal fusion of audio–lip–face motion, correlation, and depth features for biometric person authentication. The proposed approach combines the information from different audio–video based modules, namely the audio–lip motion module, the audio–lip correlation module, and the 2D + 3D motion-depth fusion module, and performs a hybrid cascaded fusion in an automatic, unsupervised, and adaptive manner by adapting to the local performance of each module. This is done by taking the output-score based reliability estimates (confidence measures) of each module into account. The module weightings are determined automatically such that the reliability measure of the combined scores is maximised. To test the robustness of the proposed approach, the audio and visual speech (mouth) modalities are degraded to emulate various levels of train/test mismatch, employing additive white Gaussian noise for the audio signals and JPEG compression for the video signals. The results show improved fusion performance over a range of tested levels of audio and video degradation, compared to the individual module performances. Experiments on the 3D stereovision database AVOZES show that, at severe levels of audio and video mismatch, the audio, mouth, 3D face, and tri-module (audio–lip motion, correlation, and depth) fusion EERs were 42.9%, 32%, 15%, and 7.3%, respectively, for the biometric person authentication task.
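The abstract describes the fusion rule only at a high level: per-module reliability estimates are derived from output scores, and module weightings are chosen so that the reliability of the combined scores is maximised. The sketch below illustrates one plausible reading of that scheme. The margin-based `reliability` measure, the simplex grid search, and all score values are assumptions for illustration; the paper's actual confidence measure and optimisation are not specified in the abstract.

```python
import itertools
import numpy as np

def reliability(scores):
    """Illustrative confidence measure: margin between the two highest
    candidate scores. The abstract only says reliability is derived from
    output scores; the exact measure is an assumption here."""
    top2 = np.sort(scores)[-2:]
    return float(top2[1] - top2[0])

def fuse_adaptive(module_scores, steps=20):
    """Weighted-sum score fusion. Searches a coarse grid over the weight
    simplex and keeps the weighting that maximises the reliability of the
    fused score vector, mirroring the abstract's 'weightings ... such that
    the reliability measure of the combined scores is maximised'."""
    n = len(module_scores)
    best_w, best_r = None, -np.inf
    for raw in itertools.product(range(steps + 1), repeat=n):
        if sum(raw) == 0:
            continue
        w = np.array(raw, dtype=float) / sum(raw)
        fused = sum(wi * si for wi, si in zip(w, module_scores))
        r = reliability(fused)
        if r > best_r:
            best_r, best_w = r, w
    fused = sum(wi * si for wi, si in zip(best_w, module_scores))
    return fused, best_w

# Hypothetical match scores from the three modules over the same
# candidate list (values invented for the example):
audio_lip_motion = np.array([0.62, 0.40, 0.35])
audio_lip_corr   = np.array([0.70, 0.30, 0.28])
motion_depth     = np.array([0.55, 0.52, 0.20])

fused, weights = fuse_adaptive([audio_lip_motion, audio_lip_corr, motion_depth])
print(weights)  # weighting adapts toward the more reliable modules
print(fused)    # combined scores; the top entry is the accepted identity
```

Because the weighting is recomputed from the scores of each trial, a module degraded by noise (e.g., AWGN on audio or JPEG artefacts on video) automatically receives less influence, which is consistent with the adaptive, unsupervised behaviour the abstract claims.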

Keywords: Lip, 3D Face, Voice, Biometric, Identity verification, Robust, Multilevel fusion

Article history: Received 3 May 2006, Revised 22 February 2008, Accepted 28 February 2008, Available online 6 March 2008.

DOI: https://doi.org/10.1016/j.imavis.2008.02.009