Fusion of facial expressions and EEG for implicit affective tagging

Abstract

The explosion of user-generated, untagged multimedia data in recent years generates a strong need for efficient search and retrieval of this data. The predominant method for content-based tagging is slow, labor-intensive manual annotation. Consequently, automatic tagging is currently a subject of intensive research. However, it is clear that the process will not be fully automated in the foreseeable future. We propose to involve the user and investigate methods for implicit tagging, wherein users' responses to the interaction with the multimedia content are analyzed in order to generate descriptive tags. Here, we present a multi-modal approach that analyzes both facial expressions and electroencephalography (EEG) signals for the generation of affective tags. We perform classification and regression in the valence-arousal space and present results for both feature-level and decision-level fusion. We demonstrate improvement in the results when using both modalities, suggesting that the modalities contain complementary information.
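The abstract contrasts feature-level fusion (combining modality features before learning a single model) with decision-level fusion (training one model per modality and combining their outputs). The minimal sketch below illustrates that distinction on synthetic data; the feature dimensions, Ridge regressors, and equal fusion weights are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical per-trial features (dimensions are placeholders, not the paper's).
n_trials = 200
face_feats = rng.normal(size=(n_trials, 20))  # e.g. facial expression descriptors
eeg_feats = rng.normal(size=(n_trials, 32))   # e.g. EEG band-power features
valence = rng.uniform(1, 9, size=n_trials)    # continuous valence ratings

# Feature-level fusion: concatenate both modalities, train one regressor.
fused = np.hstack([face_feats, eeg_feats])
feature_level_model = Ridge().fit(fused, valence)
pred_feature_level = feature_level_model.predict(fused)

# Decision-level fusion: train one regressor per modality,
# then combine the two predictions (equal weights assumed here).
face_model = Ridge().fit(face_feats, valence)
eeg_model = Ridge().fit(eeg_feats, valence)
pred_decision_level = 0.5 * face_model.predict(face_feats) \
                    + 0.5 * eeg_model.predict(eeg_feats)
```

In practice the fusion weights would be tuned on held-out data, and the same two schemes apply unchanged to classification in the valence-arousal space by swapping the regressor for a classifier.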

Keywords: Emotion classification, EEG, Facial expressions, Signal processing, Pattern classification, Affective computing

Article history: Received 26 October 2011, Revised 3 September 2012, Accepted 17 October 2012, Available online 1 November 2012.

DOI: https://doi.org/10.1016/j.imavis.2012.10.002