Concepts
Computer science, Discriminant, Artificial intelligence, Pattern recognition (psychology), Discriminator, Convolutional neural network, Electroencephalography (EEG), Feature (linguistics), Short-time Fourier transform, Speech recognition, Machine learning, Fourier transform, Mathematics, Psychology, Psychiatry, Mathematical analysis, Philosophy, Detector, Telecommunications, Fourier analysis, Linguistics
Authors
Chao Li, Ning Bian, Ziping Zhao, Haishuai Wang, Björn Schüller
Identifier
DOI:10.1016/j.inffus.2023.102156
Abstract
Current research suggests that EEG emotion recognition faces certain limitations, including redundant and uninformative time frames and channels, as well as inter- and intra-individual differences in the EEG signals of different subjects. To address these limitations, a Cross-attention-based Dilated Causal Convolutional Neural Network with Domain Discriminator (CADD-DCCNN) for multi-view EEG-based emotion recognition is proposed to minimize individual differences and automatically learn more discriminative emotion-related features. First, differential entropy (DE) features are obtained from the raw EEG signals using the short-time Fourier transform (STFT). Second, each channel of the DE features is regarded as a view, and attention mechanisms are applied within each view to aggregate discriminative affective information at the level of individual EEG time frames. Then, a dilated causal convolutional neural network is employed to distill nonlinear relationships among different time frames. Next, feature-level fusion combines the features from multiple channels, aiming to exploit the complementary information among different views and enhance the representational power of the fused feature. Finally, to minimize individual differences, a domain discriminator is employed to encourage domain-invariant features, projecting data from the different domains into a shared representation space. We evaluated the proposed method on two public datasets, SEED and DEAP. The experimental results show that CADD-DCCNN outperforms state-of-the-art (SOTA) methods.
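The pipeline described in the abstract (per-view attention over DE time frames, a dilated causal temporal CNN, feature-level fusion across channels, and an adversarial domain discriminator) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the class name, layer sizes, number of attention heads, and the SEED-like defaults (62 channels, 3 emotion classes, 15 subject domains) are assumptions, and the domain discriminator is realized here with a DANN-style gradient-reversal layer, a common choice that the abstract does not specify.

# Minimal illustrative sketch of the CADD-DCCNN-style pipeline (assumptions noted above).
import torch
import torch.nn as nn
from torch.autograd import Function


class GradientReversal(Function):
    """Identity in the forward pass; reverses and scales gradients in the backward pass,
    as commonly used for adversarial domain adaptation."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class CADDDCCNNSketch(nn.Module):
    """Hypothetical re-creation: per-channel (view) attention over time frames,
    dilated causal 1-D convolutions, feature-level fusion, an emotion classifier,
    and a domain (subject) discriminator for domain-invariant features."""
    def __init__(self, n_channels=62, d_model=32, n_emotions=3, n_domains=15):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                   # lift scalar DE values per frame
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Dilated causal convolutions over the time axis (causality via left padding).
        self.conv1 = nn.Conv1d(d_model, d_model, kernel_size=2, dilation=1)
        self.conv2 = nn.Conv1d(d_model, d_model, kernel_size=2, dilation=2)
        self.fuse = nn.Linear(n_channels * d_model, 128)     # feature-level fusion across views
        self.emotion_head = nn.Linear(128, n_emotions)
        self.domain_head = nn.Linear(128, n_domains)         # domain discriminator

    def forward(self, de, lambd=1.0):
        # de: (batch, n_channels, n_frames) differential-entropy features (e.g. from STFT bands)
        b, c, t = de.shape
        x = self.embed(de.reshape(b * c, t, 1))              # (b*c, t, d_model)
        x, _ = self.attn(x, x, x)                            # attention across time frames per view
        x = x.transpose(1, 2)                                # (b*c, d_model, t) for Conv1d
        x = torch.relu(self.conv1(nn.functional.pad(x, (1, 0))))   # causal: pad left by (k-1)*d
        x = torch.relu(self.conv2(nn.functional.pad(x, (2, 0))))
        x = x.mean(dim=-1).reshape(b, c, -1).flatten(1)      # pool over time, concatenate views
        feat = torch.relu(self.fuse(x))
        emotion_logits = self.emotion_head(feat)
        domain_logits = self.domain_head(GradientReversal.apply(feat, lambd))
        return emotion_logits, domain_logits


# Usage on random stand-in data (batch of 8, 62 channels, 30 time frames).
model = CADDDCCNNSketch()
emotion_logits, domain_logits = model(torch.randn(8, 62, 30))
print(emotion_logits.shape, domain_logits.shape)   # torch.Size([8, 3]) torch.Size([8, 15])

In training, the emotion head would be optimized with a standard classification loss while the gradient-reversal layer pushes the shared features to confuse the domain head, which is one common way to realize the domain-invariance objective described in the abstract.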