Discriminative
Computer science
Electroencephalography (EEG)
Artificial intelligence
Pattern recognition (psychology)
Complementarity (molecular biology)
Subspace topology
Support vector machine
Speech recognition
Machine learning
Psychology
Psychiatry
Biology
Genetics
Authors
Peiliang Gong, Ziyu Jia, Pengpai Wang, Yueying Zhou, Daoqiang Zhang
Identifier
DOI:10.1145/3581783.3612208
Abstract
Emotion recognition based on electroencephalography (EEG) has attracted significant attention and achieved considerable advances in the fields of affective computing and human-computer interaction. However, most existing studies ignore the coupling and complementarity of complex spatiotemporal patterns in EEG signals. Moreover, how to exploit and fuse crucial discriminative aspects of highly redundant, low signal-to-noise-ratio EEG signals remains a great challenge for emotion recognition. In this paper, we propose a novel attention-based spatial-temporal dual-stream fusion network, named ASTDF-Net, for EEG-based emotion recognition. Specifically, ASTDF-Net comprises three main stages: First, the collaborative embedding module is designed to learn a joint latent subspace that captures the coupling of complicated spatiotemporal information in EEG signals. Second, stacked parallel spatial and temporal attention streams are employed to extract the most essential discriminative features and filter out redundant, task-irrelevant factors. Finally, the hybrid attention-based feature fusion module is proposed to integrate significant features discovered by the dual-stream structure, taking full advantage of the complementarity of the diverse characteristics. Extensive experiments on two publicly available emotion recognition datasets indicate that our proposed approach consistently outperforms state-of-the-art methods.
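The abstract outlines a three-stage pipeline: a collaborative embedding into a joint spatiotemporal latent subspace, parallel spatial and temporal attention streams, and a hybrid attention-based fusion of the two streams. The following is a minimal PyTorch sketch of such a pipeline under stated assumptions: the class name ASTDFNetSketch, the 62-channel / 200-sample input shape, the layer sizes, and the mean-pooling and fusion details are all hypothetical illustrations of the general idea, not the authors' released implementation.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract.
# All module names, shapes, and sizes are assumptions, not the authors' code.
import torch
import torch.nn as nn


class ASTDFNetSketch(nn.Module):
    def __init__(self, n_channels=62, n_timesteps=200, d_model=64,
                 n_heads=4, n_classes=3):
        super().__init__()
        # Stage 1: collaborative embedding of two views into a joint latent subspace.
        self.spatial_embed = nn.Linear(n_timesteps, d_model)   # channels as tokens
        self.temporal_embed = nn.Linear(n_channels, d_model)   # time steps as tokens

        # Stage 2: parallel spatial and temporal attention streams.
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

        # Stage 3: hybrid attention-based fusion of the two streams, then classification.
        self.fusion_score = nn.Linear(d_model, 1)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, n_timesteps) EEG segment.
        spatial_tokens = self.spatial_embed(x)                      # (B, C, d)
        temporal_tokens = self.temporal_embed(x.transpose(1, 2))    # (B, T, d)

        # Self-attention within each stream, then mean-pool over tokens.
        s, _ = self.spatial_attn(spatial_tokens, spatial_tokens, spatial_tokens)
        t, _ = self.temporal_attn(temporal_tokens, temporal_tokens, temporal_tokens)
        s, t = s.mean(dim=1), t.mean(dim=1)                          # (B, d) each

        # Attention weights decide how much each stream contributes to the fusion.
        streams = torch.stack([s, t], dim=1)                         # (B, 2, d)
        weights = torch.softmax(self.fusion_score(streams), dim=1)   # (B, 2, 1)
        fused = (weights * streams).sum(dim=1)                       # (B, d)
        return self.classifier(fused)


if __name__ == "__main__":
    model = ASTDFNetSketch()
    logits = model(torch.randn(8, 62, 200))   # batch of 8 EEG segments
    print(logits.shape)                        # torch.Size([8, 3])
```

In this sketch the spatial stream attends over electrode channels while the temporal stream attends over time steps, and a learned softmax weight over the two pooled streams plays the role of the fusion module; the paper's actual stacking depth, attention design, and fusion mechanism may differ.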