Human emotion recognition based on time–frequency analysis of multivariate EEG signal

Abstract

Understanding the expression of human emotional states plays a prominent role in interactive multimodal interfaces, affective computing, and the healthcare sector. Emotion recognition through electroencephalogram (EEG) signals is a simple, inexpensive, compact, and precise solution. This paper proposes a novel four-stage method for human emotion recognition using multivariate EEG signals. In the first stage, multivariate variational mode decomposition (MVMD) is employed to extract an ensemble of multivariate modulated oscillations (MMOs) from multichannel EEG signals. In the second stage, multivariate time–frequency (TF) images are generated using the joint instantaneous amplitude (JIA) and joint instantaneous frequency (JIF) functions computed from the extracted MMOs. In the third stage, a deep residual convolutional neural network, ResNet-18, is customized to extract hidden features from the TF images. Finally, classification is performed by a softmax layer. To further evaluate the performance of the model, various machine learning (ML) classifiers are also employed. The feasibility and validity of the proposed method are verified on two public emotion EEG datasets. The experimental results demonstrate that the proposed method outperforms state-of-the-art emotion recognition methods, achieving best accuracies of 99.03%, 97.59%, and 97.75% for classifying arousal, dominance, and valence, respectively. Our study reveals that TF-based multivariate EEG signal analysis using a deep residual network achieves superior performance in human emotion recognition.
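To make the second stage concrete, the sketch below computes joint instantaneous amplitude and frequency from one multivariate mode. This is not the authors' code: it assumes the standard Hilbert-transform-based definitions used in multivariate TF analysis, where the JIA is the root of the summed squared per-channel amplitudes and the JIF is the amplitude-squared-weighted average of the per-channel instantaneous frequencies. The function name `joint_if_ia` and the synthetic input are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def joint_if_ia(mode, fs):
    """Compute joint instantaneous amplitude (JIA) and frequency (JIF)
    of one multivariate mode.

    mode : array of shape (channels, samples) -- one MMO across EEG channels
    fs   : sampling frequency in Hz
    Returns (jia, jif), each of shape (samples,).
    """
    analytic = hilbert(mode, axis=-1)                 # analytic signal per channel
    amp = np.abs(analytic)                            # per-channel instantaneous amplitude
    phase = np.unwrap(np.angle(analytic), axis=-1)    # per-channel instantaneous phase
    inst_freq = np.gradient(phase, axis=-1) * fs / (2 * np.pi)  # per-channel IF in Hz

    power = amp ** 2
    jia = np.sqrt(power.sum(axis=0))                  # JIA: root of summed channel power
    jif = (power * inst_freq).sum(axis=0) / power.sum(axis=0)   # power-weighted JIF
    return jia, jif

# Example: a 10 Hz oscillation present on two channels with different amplitudes
fs = 256
t = np.arange(0, 2, 1 / fs)
mode = np.vstack([np.sin(2 * np.pi * 10 * t),
                  0.5 * np.sin(2 * np.pi * 10 * t)])
jia, jif = joint_if_ia(mode, fs)
```

Stacking the JIA-weighted JIF traces of all extracted modes over time then yields a multivariate TF image of the kind fed to the network.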

Keywords: Emotion recognition, Multivariate EEG, MVMD, Deep learning, Residual network, Time–frequency analysis

Article history: Received 4 July 2021, Revised 4 October 2021, Accepted 2 December 2021, Available online 9 December 2021, Version of Record 28 December 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107867