Predicting emotions in facial expressions from the annotations in naturally occurring first encounters
Abstract:
This paper deals with the automatic identification of emotions from manual annotations of the shape and functions of facial expressions in a Danish corpus of video-recorded, naturally occurring first encounters. More specifically, a support vector classifier is trained on the corpus annotations to identify emotions in facial expressions. In the classification experiments, we test to what extent emotions expressed in naturally occurring conversations can be identified automatically by a classifier trained on the manual annotations of the shape of facial expressions and co-occurring speech tokens. We also investigate the relation between emotions and the communicative functions of facial expressions. Both emotion labels and their values in a three-dimensional space are identified; the three dimensions are Pleasure, Arousal and Dominance. The results of our experiments indicate that the classifiers perform well in identifying emotions from the coarse-grained descriptions of facial expressions and co-occurring speech. The communicative functions of facial expressions also contribute to emotion identification. The results are promising because the emotion label list comprises fine-grained emotions and affective states in naturally occurring conversations, while the shape features of facial expressions are very coarse-grained. The classification results also confirm that the annotation scheme, which combines a discrete and a dimensional description, and the manual annotations produced according to it are reliable and can be used to model and test emotional behaviours in emotional cognitive infocommunicative systems.
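The setup described in the abstract, a support vector classifier trained on categorical annotations of facial-expression shape and co-occurring speech tokens to predict emotion labels, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature names, annotation values, emotion labels, and data are all invented for the example.

```python
# Sketch of the abstract's approach (hypothetical data, not the Danish corpus):
# one-hot encode coarse-grained categorical annotations of facial-expression
# shape plus a co-occurring speech token, then train a support vector classifier
# to predict a discrete emotion label.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Invented annotation records: shape features + co-occurring speech token.
X = [
    {"eyebrows": "raised", "mouth": "smile", "token": "yes"},
    {"eyebrows": "raised", "mouth": "smile", "token": "hello"},
    {"eyebrows": "frown", "mouth": "closed", "token": "no"},
    {"eyebrows": "frown", "mouth": "closed", "token": "hmm"},
]
y = ["happy", "happy", "annoyed", "annoyed"]  # invented emotion labels

# DictVectorizer one-hot encodes the categorical features; unseen values
# at prediction time are simply ignored.
clf = make_pipeline(DictVectorizer(sparse=False), SVC(kernel="linear"))
clf.fit(X, y)

print(clf.predict([{"eyebrows": "raised", "mouth": "smile", "token": "ok"}])[0])
```

The dimensional (Pleasure, Arousal, Dominance) targets mentioned in the abstract would be handled analogously, e.g. with a support vector regressor per dimension instead of a classifier over discrete labels.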
Keywords: Multimodal corpus, Multimodal communication, Emotion, Machine learning, Feedback, Turn management, Annotation
Article history: Received 24 November 2013, Revised 21 April 2014, Accepted 23 April 2014, Available online 4 May 2014.
DOI: https://doi.org/10.1016/j.knosys.2014.04.034