Conversion of articulatory parameters into active shape model coefficients for lip motion representation and synthesis

Authors:

Highlights:

Abstract

Speech-driven facial animation combines techniques from different disciplines such as image analysis, computer graphics, and speech analysis. Active shape models (ASM) used in image analysis are excellent tools for characterizing lip contour shapes and approximating their motion in image sequences. By controlling the coefficients for an ASM, such a model can also be used for animation. We design a mapping of the articulatory parameters used in phonetics into ASM coefficients that control nonrigid lip motion. The mapping is designed to minimize the approximation error when articulatory parameters measured on training lip contours are taken as input to synthesize the training lip movements. Since articulatory parameters can also be estimated from speech, the proposed technique can form an important component of a speech-driven facial animation system.
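Below is a minimal sketch of the kind of mapping the abstract describes, assuming a linear least-squares fit from measured articulatory parameters to ASM shape coefficients; the parameter names, data shapes, and the linear form are illustrative assumptions and not necessarily the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch (assumed formulation): fit a linear map W that converts
# articulatory parameters (e.g., lip width, lip opening, protrusion) into ASM
# coefficients, chosen to minimize the approximation error on training data.

rng = np.random.default_rng(0)

# Assumed training data shapes:
#   A: (n_frames, n_artic)  articulatory parameters measured on training lips
#   B: (n_frames, n_modes)  ASM coefficients from projecting the same training
#                           lip contours onto the ASM eigenvectors
n_frames, n_artic, n_modes = 200, 3, 6
A = rng.normal(size=(n_frames, n_artic))
B = rng.normal(size=(n_frames, n_modes))

# Augment with a bias term and solve min_W ||[A 1] W - B||^2 by least squares.
A1 = np.hstack([A, np.ones((n_frames, 1))])
W, *_ = np.linalg.lstsq(A1, B, rcond=None)

# Synthesis: map new articulatory parameters to ASM coefficients, then
# reconstruct a lip contour as mean_shape + eigenvectors @ coefficients.
a_new = rng.normal(size=(1, n_artic))
b_new = np.hstack([a_new, np.ones((1, 1))]) @ W    # predicted ASM coefficients

n_points = 40                                      # landmarks on the lip contour
mean_shape = rng.normal(size=2 * n_points)         # stacked (x, y) coordinates
eigvecs = rng.normal(size=(2 * n_points, n_modes)) # ASM modes of variation
contour = mean_shape + eigvecs @ b_new.ravel()     # synthesized lip shape
print(contour.shape)                               # (80,)
```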

Keywords: Bimodal speech processing, Facial animation, Articulatory parameters, Active shape models

Article history: Received 15 January 1997, Available online 23 November 1998.

DOI: https://doi.org/10.1016/S0923-5965(98)00006-X