Keywords
Electroencephalography (EEG)
Facial expression
Computer science
Emotion classification
Eyebrow
Feature (linguistics)
Speech recognition
Brain activity and meditation
Feature selection
Artificial intelligence
Cognition
Pattern recognition (psychology)
Psychology
Neuroscience
Communication
Linguistics
Philosophy
Authors
Yi Yang, Qiang Gao, Yu Song, Xiaolin Song, Zemin Mao, Junjie Liu
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2021-06-28
Volume/Issue: 26 (2): 589-599
Citations: 30
Identifier
DOI: 10.1109/jbhi.2021.3092412
Abstract
With the development of sensor technology and learning algorithms, multimodal emotion recognition has attracted widespread attention. Most existing studies on emotion recognition have focused on hearing people. Moreover, because of hearing loss, deaf people cannot express emotions in words and may therefore have a greater need for emotion recognition. In this paper, a deep belief network (DBN) was used to classify three categories of emotion from electroencephalograph (EEG) signals and facial expressions. Signals from 15 deaf subjects were recorded while they watched emotional movie clips. Our system segments the EEG signals into five frequency bands with a 1-s non-overlapping window and then extracts the differential entropy (DE) feature. The DE features of the EEG and the facial expression images serve as the multimodal input for subject-dependent emotion recognition. To avoid feature redundancy, the top 12 EEG electrode channels (FP2, FP1, FT7, FPZ, F7, T8, F8, CB2, CB1, FT8, T7, TP8) in the gamma band and the 30 facial expression features (the areas around the eyes and eyebrows) with the largest weight values are selected. The results show that the classification accuracy reaches 99.92% with feature selection in deaf emotion recognition. Moreover, investigations of brain activity reveal that deaf brain activity changes mainly in the beta and gamma bands, and that the brain regions affected by emotion are mainly distributed in the prefrontal and outer temporal lobes.
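For readers unfamiliar with the DE feature mentioned in the abstract, the sketch below illustrates the band-wise, 1-s non-overlapping-window extraction step. It is a minimal illustration, not the authors' implementation: the band edges, the sampling rate, the Butterworth filter, and the synthetic input are all assumptions, and it uses the closed-form DE of a Gaussian signal, 0.5·ln(2πe·σ²), which is the form commonly used in EEG emotion-recognition work.

```python
# Sketch of per-band differential entropy (DE) over 1-s windows.
# Assumptions (not stated in the abstract): band edges, sampling
# rate FS, 4th-order Butterworth filtering, synthetic input signal.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200  # assumed sampling rate in Hz

# Five standard EEG bands; exact edges are an assumption.
BANDS = {
    "delta": (1, 4),
    "theta": (4, 8),
    "alpha": (8, 14),
    "beta": (14, 31),
    "gamma": (31, 50),
}

def bandpass(x, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def de_features(x, fs=FS):
    """DE per band per non-overlapping 1-s window.

    For a band-limited signal that is approximately Gaussian, the
    differential entropy reduces to 0.5 * ln(2 * pi * e * var(x)).
    """
    n_windows = len(x) // fs
    feats = np.empty((len(BANDS), n_windows))
    for i, (low, high) in enumerate(BANDS.values()):
        filtered = bandpass(x, low, high, fs)
        for w in range(n_windows):
            seg = filtered[w * fs:(w + 1) * fs]
            feats[i, w] = 0.5 * np.log(2 * np.pi * np.e * np.var(seg))
    return feats  # shape: (n_bands, n_windows)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal(10 * FS)  # 10 s of synthetic "EEG"
    print(de_features(eeg).shape)       # (5, 10)
```

In the paper's pipeline, features like these (per channel and per band) would then be ranked by weight so that only the top gamma-band channels listed in the abstract are kept as the EEG half of the multimodal input.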