Cockpit
Computer science
Feature (linguistics)
Realization (probability)
Human-computer interaction
Speech recognition
Emotion recognition
Key (lock)
Engineering
Computer security
Philosophy
Linguistics
Statistics
Mathematics
Aerospace engineering
Authors
Wenbo Li, Jiyong Xue, Ruichen Tan, Cong Wang, Zejian Deng, Shen Li, Gang Guo, Dongpu Cao
Source
Journal: IEEE Transactions on Intelligent Vehicles
[Institute of Electrical and Electronics Engineers]
Date: 2023-03-21
Volume/Issue: 8 (4): 2684-2697
Citations: 13
Identifiers
DOI:10.1109/tiv.2023.3259988
Abstract
Affective interaction between the intelligent cockpit and humans is an emerging topic full of opportunities. Robust recognition of the driver's emotions is the first step toward affective interaction, and recognizing emotions from the driver's speech in the intelligent cockpit has broad potential for technical application. In this paper, we first propose a multi-feature fusion, parallel-structure speech emotion recognition network that complementarily fuses the global acoustic features and local spectral features of the entire utterance. Second, we designed and conducted speech data collection under driver emotions and established a driver speech emotion (SpeechEmo) dataset in a dynamic driving environment, covering 40 participants. Finally, the proposed model was validated on the SpeechEmo and public datasets, and quantitative analysis was carried out. The proposed model achieved advanced recognition performance, and ablation experiments verified the importance of its different components. The proposed model and dataset are beneficial to realizing human-vehicle affective interaction in future intelligent cockpits, toward a better human experience.
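For illustration only, the sketch below shows the general kind of parallel two-branch fusion the abstract describes: one branch encodes utterance-level (global) acoustic features, the other extracts local spectral features from a mel-spectrogram with a small CNN, and the two are fused before emotion classification. All layer sizes, feature dimensions, and the concatenation-based fusion are assumptions made here for the example; they are not the authors' actual architecture.

```python
# Minimal sketch of a parallel two-branch fusion network for speech emotion
# recognition. Dimensions, layers, and concatenation fusion are illustrative
# assumptions, not the model described in the paper.
import torch
import torch.nn as nn


class ParallelFusionSER(nn.Module):
    def __init__(self, num_global_feats=88, num_classes=4):
        super().__init__()
        # Branch 1: global acoustic features of the whole utterance
        # (e.g., an utterance-level statistical feature vector).
        self.global_branch = nn.Sequential(
            nn.Linear(num_global_feats, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
        )
        # Branch 2: local spectral features from the mel-spectrogram,
        # extracted with a small CNN.
        self.local_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 32, 1, 1)
            nn.Flatten(),             # -> (batch, 32)
        )
        # Fusion head: concatenate both branches and classify emotions.
        self.classifier = nn.Sequential(
            nn.Linear(128 + 32, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, global_feats, mel_spec):
        # global_feats: (batch, num_global_feats)
        # mel_spec:     (batch, 1, num_mels, time_frames)
        g = self.global_branch(global_feats)
        s = self.local_branch(mel_spec)
        fused = torch.cat([g, s], dim=-1)  # complementary fusion by concatenation
        return self.classifier(fused)


if __name__ == "__main__":
    model = ParallelFusionSER()
    logits = model(torch.randn(2, 88), torch.randn(2, 1, 64, 300))
    print(logits.shape)  # torch.Size([2, 4])
```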