Computer science
Pattern recognition (psychology)
Electroencephalography
Artificial intelligence
Entropy (arrow of time)
Feature (linguistics)
Channel (broadcasting)
Speech recognition
Psychology
Computer network
Linguistics
Quantum mechanics
Psychiatry
Physics
Philosophy
Authors
Zhongmin Wang, Jiawen Zhang, Yan He, Jie Zhang
Identifier
DOI: 10.1007/s10489-021-03070-2
Abstract
The electroencephalogram (EEG) is a time-varying, nonlinear, spatially discrete signal that has been widely used in emotion recognition. To date, most studies have relied either on time–frequency domain features or on features extracted from brain networks; analyzing the signal from a single point of view, however, discards part of its spatial or time–frequency information. In addition, EEG-based network analysis is strongly affected by the inherent volume effect of EEG. The problems addressed here are therefore how to eliminate the influence of the volume effect on brain network analysis and how to extract features that reflect both time–frequency and spatial information. This paper proposes a feature fusion method that better reflects the emotional state, using multichannel weighted multiscale permutation entropy (MC-WMPE) as the feature. The method takes into account both the time–frequency and the spatial information of EEG signals and also eliminates the inherent volume effect. We first compute the multiscale permutation entropy (MPE) of the EEG signal in each channel and construct a brain functional network from the Pearson correlation coefficient (PCC) between channels. The PageRank algorithm is used to rank the importance of the nodes in the brain functional network, and the resulting node weights are used to screen out the channels that matter most for emotion recognition. The channel weights and the MPE values are then combined by weighting to obtain MC-WMPE as the feature. The results show that both temporal and spatial information are significant when processing EEG signals, and that analysis of the frontal, parietal and occipital lobes is necessary for studying the activity of the cerebral cortex under emotional stimulation. Finally, experiments were carried out on the DEAP and SEED databases, where the highest emotion recognition accuracies obtained with this combined feature were 85.28% and 87.31%, respectively.
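The pipeline described in the abstract (per-channel MPE, a Pearson-correlation brain functional network, PageRank channel weights, then a weighted combination) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the embedding order, the number of scales, and the use of networkx's pagerank on the absolute correlation matrix are all assumptions made for the example.

```python
# Minimal sketch of an MC-WMPE-style feature extraction pipeline.
# Names, parameters, and graph construction details are illustrative only.
import numpy as np
import networkx as nx
from itertools import permutations

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal."""
    patterns = list(permutations(range(order)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window).tolist())  # ordinal pattern of the window
        counts[patterns.index(pattern)] += 1
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(len(patterns))

def multiscale_pe(x, max_scale=5, order=3):
    """MPE: coarse-grain the signal at each scale, then compute PE."""
    mpe = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)  # non-overlapping averaging
        mpe.append(permutation_entropy(coarse, order=order))
    return np.array(mpe)

def mc_wmpe(eeg, max_scale=5):
    """eeg: (channels, samples). Returns channel-weighted MPE features."""
    n_ch = eeg.shape[0]
    mpe = np.array([multiscale_pe(eeg[c], max_scale) for c in range(n_ch)])
    # Brain functional network from Pearson correlation between channels.
    corr = np.corrcoef(eeg)
    np.fill_diagonal(corr, 0.0)
    g = nx.from_numpy_array(np.abs(corr))
    rank = nx.pagerank(g)                       # node (channel) importance
    w = np.array([rank[c] for c in range(n_ch)])
    return mpe * w[:, None]                     # weight each channel's MPE by its rank

# Example with random data standing in for a 32-channel EEG trial.
features = mc_wmpe(np.random.randn(32, 2000))
print(features.shape)  # (32, max_scale)
```

In this sketch the channel screening step is implicit: channels with higher PageRank weights contribute proportionally more to the fused feature, which mirrors the weighted combination of channel importance and MPE that the abstract describes.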