Modelling and combining emotions, visual speech and gestures in virtual head models
Authors:
Abstract
This paper describes the generation and realistic blending of emotional facial expressions, visual speech, facial poses and other non-emotional secondary facial expressions on 3D computer graphics head models. These expressions are generated and blended by means of a mathematical formulation of a psychological theory of facial expression generation. In total, 23 emotional expressions, 21 emotion blends, 19 visemes, 342 viseme blends and 37 secondary expressions and postures have been modelled; because these entities can be blended at intensities that vary continuously over time, an unlimited number of realistic facial expressions can be produced. The blending algorithm enables animators to script their animations at higher, more user-friendly levels, or to use the results of artificial intelligence and computational psychological methods to generate and manage expressive, autonomous or near-autonomous virtual characters, without having to rely on performance-based methods.
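As a rough illustration of the kind of blending the abstract describes, the sketch below combines expression entities (emotions, visemes, gestures) represented as parameter displacement vectors, weighted by time-varying intensities. The vector representation, the `blend` function and the normalised weighted sum are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def blend(expressions, intensities):
    """Blend expression displacement vectors by their intensities.

    expressions: dict of name -> np.ndarray of control-parameter displacements
    intensities: dict of name -> float weight in [0, 1]; weights may be
                 re-sampled every frame, so intensities can vary with time.

    This normalised weighted sum is a hypothetical stand-in for the
    psychology-based blending rule used in the paper.
    """
    total = np.zeros_like(next(iter(expressions.values())), dtype=float)
    weight_sum = 0.0
    for name, vec in expressions.items():
        w = intensities.get(name, 0.0)
        total += w * vec
        weight_sum += w
    # Normalise so simultaneous expressions do not overshoot the model's range.
    return total / weight_sum if weight_sum > 0 else total

# Example: blend a "joy" emotion with an /a/ viseme (illustrative vectors).
joy = np.array([0.8, 0.1, 0.0])
viseme_a = np.array([0.0, 0.9, 0.4])
frame = blend({"joy": joy, "/a/": viseme_a}, {"joy": 0.5, "/a/": 1.0})
```

Calling `blend` once per animation frame with updated intensities yields a continuous stream of blended expressions, which is what allows a finite library of modelled entities to produce an unlimited variety of faces.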
Keywords: Emotion–viseme–gesture blending, Psychology-based modelling, Facial animation
Article history: Received 27 February 2005, Revised 19 February 2006, Accepted 20 February 2006, Available online 20 March 2006.
DOI: https://doi.org/10.1016/j.image.2006.02.002