Affect recognition from scalp-EEG using channel-wise encoder networks coupled with geometric deep learning and multi-channel feature fusion
Abstract
The expression of human emotions is a complex process that often manifests through physiological and psychological traits and results in spatio-temporal brain activity. This brain activity can be captured with an electroencephalogram (EEG) and used for emotion recognition. In this paper, we present a novel approach to EEG-based emotion recognition (in terms of arousal, valence, and dominance) from unprocessed EEG signals. Input EEG samples are passed through channel-specific encoders built from SincNet-based convolution blocks (whose filters are fine-tuned for emotion recognition during training) to learn high-level, task-related features. The resulting feature embeddings are then passed through a set of graph convolution networks to model the spatial propagation of brain activity, under the assumption that the activity captured at one electrode is influenced by the activity at neighbouring electrodes. The channels are represented as nodes in a graph that follows the relative positioning of the electrodes during dataset acquisition. Multi-head attention is applied together with the graph convolutions to jointly attend to features from different representation sub-spaces, which improves learning. The resulting features are then passed through a deep neural network-based multi-task classifier to identify the dimensional emotional states (low/high). Our proposed model achieves accuracies of 88.24%, 88.80% and 88.22% for arousal, valence and dominance respectively under 10-fold cross-validation, and 63.71%, 64.98% and 61.81% under Leave-One-Subject-Out (LOSO) cross-validation on the DREAMER dataset, and 69.72%, 69.43% and 70.72% under LOSO evaluation on the DEAP dataset, surpassing state-of-the-art methods.
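The abstract's two core ingredients, learnable sinc band-pass filters (SincNet-style convolution kernels) and graph convolutions over the electrode layout, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function names, the fixed cutoff frequencies, the 128 Hz sampling rate, the kernel length, and the toy adjacency normalisation scheme are all assumptions made for demonstration. In the full model the cutoffs are learnable parameters and the graph follows the actual electrode montage.

```python
import numpy as np

def sinc_bandpass(f1_hz, f2_hz, kernel_len=65, fs=128.0):
    """Band-pass FIR kernel parameterised only by its two cutoff
    frequencies, in the style of SincNet. In the full model f1/f2 are
    learnable; here they are fixed for illustration."""
    f1, f2 = f1_hz / fs, f2_hz / fs                      # normalised cutoffs
    n = np.arange(kernel_len) - (kernel_len - 1) / 2.0   # taps centred at 0
    # Difference of two low-pass sinc responses yields a band-pass filter.
    h = 2 * f2 * np.sinc(2 * f2 * n) - 2 * f1 * np.sinc(2 * f1 * n)
    return h * np.hamming(kernel_len)                    # window to reduce ripple

def gcn_propagate(x, adj, w):
    """One graph-convolution step: a symmetrically normalised adjacency
    (with self-loops) mixes each electrode node's features with those of
    its spatial neighbours, then a shared linear map projects them."""
    a = adj + np.eye(adj.shape[0])                       # add self-loops
    d = 1.0 / np.sqrt(a.sum(axis=1))
    a_hat = a * d[:, None] * d[None, :]                  # D^-1/2 (A+I) D^-1/2
    return a_hat @ x @ w

# Example: a theta-band (4-8 Hz) kernel for a 128 Hz EEG recording.
theta_kernel = sinc_bandpass(4.0, 8.0)
```

In the paper's pipeline, banks of such kernels act as the first convolution layer of each channel-specific encoder, and steps like `gcn_propagate` (interleaved with multi-head attention in the actual model) fuse the per-channel embeddings across the electrode graph.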
Keywords: Graph networks, Multi-head attention, Multi-task learning, Multi-channel fusion, SincNet
Article history: Received 22 October 2021, Revised 9 May 2022, Accepted 10 May 2022, Available online 17 May 2022, Version of Record 2 June 2022.
DOI: https://doi.org/10.1016/j.knosys.2022.109038