Journal: IEEE Sensors Journal [Institute of Electrical and Electronics Engineers]   Date: 2022-05-03   Volume/Issue: 22 (12): 11954-11964   Citations: 20
Identifier
DOI: 10.1109/jsen.2022.3172133
Abstract
Automatic emotion recognition based on multichannel electroencephalogram (EEG) data is a fundamental but challenging problem. Some previous studies ignore the correlation of brain activity across channels and frequency bands, which may carry information related to emotional states. In this work, we propose a 3-D feature construction method based on spatial-spectral information. First, the power values of each channel are arranged into a 2-D spatial feature representation according to the electrode positions. Then, the features from different frequency bands are stacked into a 3-D integrated feature tensor to capture their complementary information. In addition, we propose a novel framework based on feature fusion modules and a dilated bottleneck-based convolutional neural network (DBCN), which builds a more discriminative model for processing the 3-D features in EEG emotion recognition. Both participant-dependent and participant-independent protocols are used to evaluate the performance of the proposed DBCN on the DEAP benchmark dataset. Mean 2-class classification accuracies of 89.67% / 90.93% (participant-dependent) and 79.45% / 83.98% (participant-independent) were achieved for arousal / valence, respectively. These results suggest that the proposed method, based on the integration of spatial and spectral information, could be extended to the assessment of mood disorders and to human-computer interaction (HCI) applications.
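The abstract's description of the 3-D spatial-spectral feature and the dilated bottleneck block can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the 9x9 electrode grid, the channel-to-grid mapping, the four frequency bands, and all layer widths and dilation rates are assumptions, since the abstract does not specify them.

```python
import numpy as np
import torch
import torch.nn as nn

# Assumed 9x9 electrode grid for the 32-channel DEAP montage; the mapping from
# channel index to (row, col) is only illustrative (a few channels shown).
GRID_SIZE = 9
CHANNEL_POS = {0: (0, 3), 1: (1, 3), 2: (2, 2), 3: (2, 0)}
BANDS = ["theta", "alpha", "beta", "gamma"]  # assumed frequency bands


def build_spatial_spectral_tensor(band_power: np.ndarray) -> np.ndarray:
    """Arrange per-channel band power into a (n_bands, H, W) tensor.

    band_power: array of shape (n_bands, n_channels) with one power value
    per band and channel. Each band becomes a 2-D map indexed by electrode
    position; grid cells with no electrode stay zero.
    """
    n_bands, n_channels = band_power.shape
    tensor = np.zeros((n_bands, GRID_SIZE, GRID_SIZE), dtype=np.float32)
    for ch, (r, c) in CHANNEL_POS.items():
        if ch < n_channels:
            tensor[:, r, c] = band_power[:, ch]
    return tensor


class DilatedBottleneck(nn.Module):
    """Minimal dilated bottleneck block: 1x1 reduce -> dilated 3x3 -> 1x1 expand,
    with a residual connection. Channel widths and dilation rate are guesses."""

    def __init__(self, channels: int, dilation: int = 2, reduction: int = 4):
        super().__init__()
        mid = channels // reduction
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.block(x))


# Example: one trial's band power (4 bands x 32 channels) -> 3-D feature -> block.
band_power = np.random.rand(len(BANDS), 32)
feat = torch.from_numpy(build_spatial_spectral_tensor(band_power)).unsqueeze(0)  # (1, 4, 9, 9)
stem = nn.Conv2d(len(BANDS), 32, kernel_size=3, padding=1)  # lift bands to 32 feature maps
block = DilatedBottleneck(32)
out = block(stem(feat))
print(out.shape)  # torch.Size([1, 32, 9, 9])
```

The dilated 3x3 convolution keeps the 9x9 spatial size (padding equal to the dilation rate) while enlarging the receptive field over neighboring electrodes, which is one plausible reading of how a dilated bottleneck would process such small spatial maps.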