Electroencephalography (EEG)
Facial expression
Artificial intelligence
Sadness
Speech recognition
Psychology
Pattern recognition (psychology)
Sensorimotor rhythm
Emotion classification
Computer science
Brain-computer interface
Audiology
Anger
Neuroscience
Medicine
Psychiatry
Authors
Dahua Li, Jiayin Liu, Yi Yang, Fazheng Hou, Haotian Song, Yu Song, Qiang Gao, Zemin Mao
Source
Journal: IEEE Transactions on Neural Systems and Rehabilitation Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 31: 437-445
Citations: 5
Identifier
DOI: 10.1109/tnsre.2022.3225948
Abstract
Emotion analysis has been employed in many fields such as human-computer interaction, rehabilitation, and neuroscience, but most emotion analysis methods focus on healthy controls or depression patients. This paper aims to classify emotional expressions in individuals with hearing impairment based on EEG signals and facial expressions. The two kinds of signals were collected simultaneously while the subjects watched affective video clips, and we labeled the video clips with discrete emotional states (fear, happiness, calmness, and sadness). We extracted differential entropy (DE) features from the EEG signals and converted the DE features into EEG topographic maps (ETM). Next, the ETM and facial expressions were fused by a multichannel fusion method. Finally, a deep learning classifier, CBAM_ResNet34, combining a Residual Network (ResNet) with the Convolutional Block Attention Module (CBAM), was used for subject-dependent emotion classification. The results show that the average classification accuracy for four-emotion recognition after multimodal fusion reaches 78.32%, higher than the 67.90% for facial expressions and 69.43% for EEG signals alone. Moreover, Gradient-weighted Class Activation Mapping (Grad-CAM) visualization of the ETM showed that the prefrontal, temporal, and occipital lobes are the brain regions most closely related to emotional changes in individuals with hearing impairment.
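The DE features named in the abstract follow the standard definition used in EEG emotion work: for a band-filtered signal assumed to be Gaussian, differential entropy reduces to ½·ln(2πeσ²), where σ² is the signal variance in that band. Below is a minimal sketch of per-band DE extraction; the band boundaries, sampling rate, and function names are illustrative assumptions, not details given in the abstract. The resulting per-channel DE values would then be mapped onto a 2-D scalp layout to form the ETM.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Conventional EEG bands in Hz (assumed; the abstract does not list the bands used).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(x):
    """DE of a zero-mean Gaussian signal: 0.5 * ln(2 * pi * e * sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_features(eeg, fs=250):
    """eeg: (channels, samples) array -> (channels, n_bands) DE feature matrix."""
    feats = np.empty((eeg.shape[0], len(BANDS)))
    for j, (lo, hi) in enumerate(BANDS.values()):
        # Band-pass filter each channel, then take DE of the filtered signal.
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)
        feats[:, j] = [differential_entropy(ch) for ch in filtered]
    return feats
```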
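CBAM_ResNet34 pairs ResNet-34 with the Convolutional Block Attention Module of Woo et al. (2018), which applies channel attention followed by spatial attention to a feature map. The abstract does not give the authors' exact wiring, so the following PyTorch sketch shows only the generic CBAM module; the reduction ratio of 16 and the 7×7 spatial kernel are the common defaults from the original CBAM paper, assumed here.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention then spatial attention, as in Woo et al. (2018)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: 7x7 conv over stacked channel-wise avg/max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                    # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))          # spatial attention
```

In a CBAM-ResNet, one such module is typically inserted inside each residual block after its final convolution; where the authors placed it in ResNet-34 is not specified in the abstract.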