Computer science
Electroencephalography
Artificial intelligence
Speech recognition
Modality
Eye movement
Artificial neural network
Movement (music)
Pattern recognition (psychology)
Computer vision
Neuroscience
Psychology
Philosophy
Chemistry
Polymer chemistry
Aesthetics
Authors
Baole Fu,Wenhao Chu,Chunrui Gu,Yinhua Liu
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2024-06-25
Volume/Issue: 28 (10): 5865-5876
Identifier
DOI: 10.1109/jbhi.2024.3419043
Abstract
Multimodal emotion recognition is attracting attention because integrating information from different sensory modalities can improve performance. Electroencephalogram (EEG) signals are considered objective indicators of emotion and provide precise insights, although their collection is complex. In contrast, eye movement signals are easier to collect but more susceptible to environmental and individual differences. Conventional emotion recognition methods typically use separate models for different modalities, potentially overlooking their inherent connections. This study introduces a cross-modal guiding neural network designed to fully leverage the strengths of both modalities. The network includes a dual-branch feature extraction module that simultaneously extracts features from EEG and eye movement signals, and a feature guidance module that uses EEG features to direct eye movement feature extraction, reducing the impact of subjective factors. A feature reweighting module further explores emotion-related features within the eye movement signals, improving classification accuracy. Experiments on both the SEED-IV dataset and our own collected dataset confirm the model's strong performance.
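The abstract's three components (dual-branch extraction, EEG-guided eye-movement features, and feature reweighting) can be illustrated with a minimal NumPy forward-pass sketch. This is not the authors' implementation: the feature dimensions, the sigmoid gate used for guidance, and the softmax used for reweighting are all illustrative assumptions chosen only to show how the modules could connect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions (hypothetical, not from the paper):
# 310-d EEG features, 31-d eye-movement features, 4 emotion classes.
d_eeg, d_eye, d_hid, n_cls = 310, 31, 64, 4

def dense(x, w, b):
    """One fully connected layer with ReLU activation."""
    return np.maximum(0.0, x @ w + b)

# Dual-branch feature extraction: one weight set per modality.
w_eeg, b_eeg = rng.normal(0, 0.1, (d_eeg, d_hid)), np.zeros(d_hid)
w_eye, b_eye = rng.normal(0, 0.1, (d_eye, d_hid)), np.zeros(d_hid)

# Guidance module (assumed form): EEG features produce a sigmoid gate
# that modulates the eye-movement features element-wise.
w_g, b_g = rng.normal(0, 0.1, (d_hid, d_hid)), np.zeros(d_hid)

# Reweighting module (assumed form): softmax weights over the guided
# eye-movement feature channels emphasize emotion-related components.
w_r, b_r = rng.normal(0, 0.1, (d_hid, d_hid)), np.zeros(d_hid)

# Classifier over the fused (concatenated) representation.
w_out, b_out = rng.normal(0, 0.1, (2 * d_hid, n_cls)), np.zeros(n_cls)

def forward(x_eeg, x_eye):
    f_eeg = dense(x_eeg, w_eeg, b_eeg)            # EEG branch
    f_eye = dense(x_eye, w_eye, b_eye)            # eye-movement branch
    gate = 1.0 / (1.0 + np.exp(-(f_eeg @ w_g + b_g)))
    f_eye_guided = gate * f_eye                   # EEG guides eye features
    scores = f_eye_guided @ w_r + b_r
    scores -= scores.max(axis=-1, keepdims=True)  # stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    f_eye_rw = weights * f_eye_guided             # reweighted eye features
    fused = np.concatenate([f_eeg, f_eye_rw], axis=-1)
    return fused @ w_out + b_out                  # class logits

x_eeg = rng.normal(size=(8, d_eeg))               # batch of 8 samples
x_eye = rng.normal(size=(8, d_eye))
logits = forward(x_eeg, x_eye)
print(logits.shape)                               # (8, 4)
```

The design choice worth noting is the asymmetry: the gate flows only from EEG to the eye-movement branch, matching the abstract's claim that the objective EEG features are used to reduce the influence of subjective factors in the eye-movement signals.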