Computer science
Facial expression
Emotion recognition
Convolutional neural network
Valence (emotion)
Task
Arousal
Pattern
Perception
Modality
Domain
Speech recognition
Artificial intelligence
Psychology
Social science
Chemistry
Physics
Mathematics
Management
Quantum mechanics
Neuroscience
Sociology
Polymer chemistry
Pure mathematics
Economics
Authors
Guoliang Xiang,Song Yao,Hanwen Deng,Xianhui Wu,Xinghua Wang,Qian Xu,Tianjian Yu,Kui Wang,Yong Peng
Identifier
DOI: 10.1016/j.engappai.2023.107772
Abstract
To address the limitations of existing databases in the field of emotion recognition and to support the trend toward integrating data from multiple sources, we established a multi-modal emotional dataset based on the spontaneous expressions of drivers. By selecting emotion-induction materials and inducing emotions before each driving task, facial expression videos and synchronized physiological signals were collected from drivers during driving. The dataset contains records of 64 participants under five emotions (neutral, happy, angry, sad, and fear), along with the emotional valence, arousal, and peak time of every participant in each driving task. To analyze the dataset, spatio-temporal convolutional neural networks were designed to process the different modalities of data with varying durations, and their emotion-recognition performance was investigated. The results demonstrate that fusing multi-modal data significantly improves the accuracy of driver emotion recognition, with accuracy gains of 11.28% and 6.83% over using only facial video signals or only physiological signals, respectively. The publication and analysis of multi-modal emotional data for driving scenarios are therefore crucial to supporting further research in multimodal perception and intelligent transportation engineering.
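To make the fusion idea in the abstract concrete, below is a minimal sketch of a two-branch spatio-temporal network that fuses face-video features with physiological-signal features before a five-class emotion head. It is an illustrative assumption, not the authors' published architecture: all layer sizes, the 16-frame clip shape, the 4-channel signal window, and the class names are placeholders chosen for the sketch.

```python
# A minimal sketch of feature-level multi-modal fusion for driver emotion
# recognition. All shapes and layer sizes are illustrative assumptions,
# not the configuration from the paper.
import torch
import torch.nn as nn


class VideoBranch(nn.Module):
    """3D CNN over short face-video clips: (B, 3, T, H, W) -> (B, 128)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse time and space
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))


class PhysioBranch(nn.Module):
    """1D CNN over multi-channel physiological windows: (B, C, L) -> (B, 128)."""
    def __init__(self, in_channels: int = 4, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))


class FusionNet(nn.Module):
    """Concatenate the two branch embeddings, then classify 5 emotions."""
    EMOTIONS = ("neutral", "happy", "angry", "sad", "fear")

    def __init__(self):
        super().__init__()
        self.video = VideoBranch()
        self.physio = PhysioBranch()
        self.head = nn.Sequential(
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, len(self.EMOTIONS)),
        )

    def forward(self, clip: torch.Tensor, signal: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.video(clip), self.physio(signal)], dim=1)
        return self.head(fused)  # (B, 5) emotion logits


if __name__ == "__main__":
    model = FusionNet()
    clip = torch.randn(2, 3, 16, 64, 64)   # batch of 16-frame face clips
    signal = torch.randn(2, 4, 1024)       # batch of 4-channel signal windows
    print(model(clip, signal).shape)       # torch.Size([2, 5])
```

Concatenating branch embeddings is only one way to realize the fusion the abstract reports; it keeps each modality's temporal processing independent, which suits streams of differing duration, since each branch pools its own time axis before the features meet.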