Multi-modal emotion recognition using EEG and speech signals

Keywords: Computer Science, Speech Recognition, Electroencephalography (EEG), Artificial Intelligence, Support Vector Machine, Robustness, Pattern Recognition (Psychology), Convolutional Neural Network, Psychology, Biochemistry, Chemistry, Psychiatry, Gene
Authors
Qian Wang, Mou Wang, Yan Yang, Xiaolei Zhang
Source
Journal: Computers in Biology and Medicine [Elsevier]
Volume: 149, Article 105907 | Cited by: 46
Identifier
DOI: 10.1016/j.compbiomed.2022.105907
Abstract

Automatic Emotion Recognition (AER) is critical for naturalistic Human-Machine Interaction (HMI). Emotions can be detected through both external behaviors, e.g., tone of voice, and internal physiological signals, e.g., the electroencephalogram (EEG). In this paper, we first constructed a multi-modal emotion database with four modalities, named MED4. MED4 consists of synchronously recorded EEG, photoplethysmography, speech, and facial-image signals from participants exposed to video stimuli designed to induce happy, sad, angry, and neutral emotions. The experiment was performed with 32 participants under two environmental conditions: a research lab with natural noise and an anechoic chamber. Four baseline algorithms were developed to validate the database and benchmark AER performance: Identification-vector + Probabilistic Linear Discriminant Analysis (I-vector + PLDA), Temporal Convolutional Network (TCN), Extreme Learning Machine (ELM), and Multi-Layer Perceptron (MLP). Furthermore, two fusion strategies, at the feature level and the decision level respectively, were designed to exploit both external and internal information about human status. The results showed that EEG signals yield higher emotion-recognition accuracy than speech signals (88.92% in the anechoic chamber and 89.70% in the naturally noisy room, vs. 64.67% and 58.92% respectively for speech). Fusion strategies that combine speech and EEG signals improve overall accuracy by 25.92% relative to speech and 1.67% relative to EEG in the anechoic chamber, and by 31.74% and 0.96% respectively in the naturally noisy room. Fusion methods also enhance the robustness of AER in noisy environments. The MED4 database will be made publicly available to encourage researchers worldwide to develop and validate advanced AER methods.
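The two fusion strategies named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the feature vectors, class-probability values, and fusion weights below are all hypothetical, chosen only to show the difference between concatenating features before classification (feature-level) and combining per-modality classifier outputs (decision-level).

```python
# Illustrative sketch of feature-level vs. decision-level fusion.
# All numbers and weights are made up for demonstration; they are
# not taken from the MED4 paper.

def feature_level_fusion(eeg_features, speech_features):
    """Concatenate per-modality feature vectors into one vector,
    which would then be fed to a single classifier."""
    return eeg_features + speech_features

def decision_level_fusion(eeg_probs, speech_probs, w_eeg=0.6, w_speech=0.4):
    """Weighted average of per-modality class posteriors.
    The weights are illustrative (EEG weighted higher, since the
    abstract reports it is the stronger modality)."""
    return [w_eeg * p + w_speech * q for p, q in zip(eeg_probs, speech_probs)]

# Toy 4-class posteriors over (happy, sad, angry, neutral)
eeg_probs = [0.7, 0.1, 0.1, 0.1]
speech_probs = [0.4, 0.3, 0.2, 0.1]

fused = decision_level_fusion(eeg_probs, speech_probs)
predicted_class = max(range(len(fused)), key=fused.__getitem__)
print(fused)            # fused posterior distribution
print(predicted_class)  # index of the predicted emotion
```

In practice, feature-level fusion lets one model learn cross-modal interactions, while decision-level fusion keeps the modality-specific classifiers independent, which can make the system more robust when one modality (e.g., speech in a noisy room) degrades.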