Generalizability theory
Computer science
Pattern
Artificial intelligence
Modality (human–computer interaction)
Similarity (geometry)
Task (project management)
Emotion recognition
Multimodal learning
Mechanism (biology)
Machine learning
Image (mathematics)
Psychology
Social science
Developmental psychology
Philosophy
Management
Epistemology
Sociology
Economics
Authors
Jinbao Xie, Wei Wang, Qingyan Wang, Yang Dali, Jinming Gu, Yongqiang Tang, Yury I. Varatnitski
Source
Journal: Neurocomputing (Elsevier)
Date: 2023-08-04
Volume/Issue: 556, Article 126649
Citations: 6
Identifier
DOI:10.1016/j.neucom.2023.126649
Abstract
With new developments in human–computer interaction, researchers are paying increasing attention to emotion recognition, especially multimodal emotion recognition, because emotion is expressed across multiple dimensions. In this study, we propose a multimodal fusion emotion recognition method (MTL-BAM) based on multitask learning and an attention mechanism to address two major shortcomings of existing multimodal emotion recognition approaches: they neglect the emotional interactions among modalities, and they focus on emotional similarity across modalities while ignoring their differences. An improved attention mechanism analyzes the emotional contribution of each modality so that the emotional representations of the modalities can learn from and complement one another, yielding better interactive fusion and forming the basis of a multitask learning framework. Three monomodal emotion recognition tasks are introduced as auxiliary tasks, enabling the model to detect emotional differences among modalities. A label generation unit is also introduced into the auxiliary tasks; it obtains monomodal emotion label values more accurately through two proportional formulas while avoiding the zero-value problem. Our results show that the proposed method outperforms selected state-of-the-art methods on four evaluation indexes of emotion classification (accuracy, F1 score, MAE, and Pearson correlation coefficient). It achieved accuracy rates of 85.36% and 84.61% on the public multimodal datasets CMU-MOSI and CMU-MOSEI, respectively, which are 2–6% higher than those of existing state-of-the-art models, demonstrating good multimodal emotion recognition performance and strong generalizability.
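The abstract outlines the architecture at a high level rather than giving implementation details. Below is a minimal PyTorch-style sketch of one plausible reading of that design: per-modality encoders, attention-weighted fusion that estimates each modality's emotional contribution, a main multimodal head, and three auxiliary unimodal heads trained jointly. All module names, feature dimensions, and the loss weighting are illustrative assumptions; this is not the authors' MTL-BAM implementation and does not reproduce the label generation unit's proportional formulas.

```python
# Hypothetical sketch of attention-weighted multimodal fusion with multitask
# (main + auxiliary unimodal) heads; dimensions and weights are assumptions.
import torch
import torch.nn as nn


class MultimodalMTLSketch(nn.Module):
    def __init__(self, dims=(768, 74, 35), hidden=128):
        super().__init__()
        # One encoder per modality (e.g., text, audio, vision); a real system
        # would use pretrained feature extractors instead of single Linear layers.
        self.encoders = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        # Attention scores approximate each modality's emotional contribution.
        self.attn = nn.Linear(hidden, 1)
        # Main multimodal head plus three auxiliary unimodal heads
        # (regression, as in CMU-MOSI/MOSEI sentiment intensity).
        self.main_head = nn.Linear(hidden, 1)
        self.aux_heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in dims])

    def forward(self, xs):
        # xs: list of (batch, dim) tensors, one per modality.
        hs = [torch.tanh(enc(x)) for enc, x in zip(self.encoders, xs)]
        stacked = torch.stack(hs, dim=1)                     # (batch, 3, hidden)
        weights = torch.softmax(self.attn(stacked), dim=1)   # modality attention
        fused = (weights * stacked).sum(dim=1)               # weighted fusion
        y_main = self.main_head(fused)
        y_aux = [head(h) for head, h in zip(self.aux_heads, hs)]
        return y_main, y_aux


def multitask_loss(y_main, y_aux, label, aux_labels, alpha=0.3):
    # Joint objective: main multimodal loss plus down-weighted auxiliary
    # unimodal losses. The auxiliary targets here stand in for the paper's
    # label generation unit, whose exact formulas are not reproduced.
    mse = nn.functional.mse_loss
    aux = sum(mse(p, t) for p, t in zip(y_aux, aux_labels))
    return mse(y_main, label) + alpha * aux


if __name__ == "__main__":
    model = MultimodalMTLSketch()
    batch = [torch.randn(4, d) for d in (768, 74, 35)]
    y_main, y_aux = model(batch)
    loss = multitask_loss(y_main, y_aux, torch.randn(4, 1), [torch.randn(4, 1)] * 3)
    loss.backward()
    print(loss.item())
```

The auxiliary heads in this sketch illustrate the multitask idea only: in the described method, their label values would come from the label generation unit rather than from independently annotated unimodal targets.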