Conversation
Modality
Emotion recognition
Cognition
Speech recognition
Psychology
Computer science
Communication
Chemistry
Neuroscience
Polymer chemistry
Authors
Lili Guo, Song Yi, Shifei Ding
Identifier
DOI:10.1016/j.knosys.2024.111969
Abstract
Emotion recognition in conversation (ERC) has attracted considerable attention owing to its broad applications in human-computer interaction. However, prior models struggle to capture the latent emotional relationships within a conversation because they do not fully exploit speaker information. Moreover, information from modalities such as text, audio, and video can complement one another in analyzing the emotional context of a conversation, yet effectively fusing multimodal features to capture fine-grained contextual information remains challenging. This paper proposes a speaker-aware cognitive network with cross-modal attention (SACCMA) for multimodal ERC that effectively leverages both multimodal and speaker information. The proposed model consists primarily of a modality encoder and a cognitive module. The modality encoder fuses feature information from speech, text, and vision using a cross-modal attention mechanism; the fused features and speaker information are then fed separately into the cognitive module to sharpen the perception of emotions within the dialogue. Compared with seven common baseline methods, our model improves the accuracy score by 2.71% and 1.70% on the IEMOCAP and MELD datasets, respectively, and the F1 score by 2.92% and 0.70%. Additional experiments further demonstrate the effectiveness of the method.
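The abstract only sketches the architecture, so below is a minimal, hypothetical illustration of the core fusion idea it describes: features of one modality (as queries) attend over features of another modality (as keys/values) via standard multi-head cross-attention. The class name `CrossModalAttention`, the feature dimensions, and the residual/layer-norm wiring are assumptions for illustration only, not the authors' SACCMA implementation.

import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Illustrative cross-modal attention block (hypothetical sketch):
    one modality's features attend over another modality's features."""
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats: torch.Tensor, context_feats: torch.Tensor) -> torch.Tensor:
        # query_feats:   (batch, seq_q, dim), e.g. projected text features
        # context_feats: (batch, seq_k, dim), e.g. projected audio or visual features
        fused, _ = self.attn(query_feats, context_feats, context_feats)
        # Residual connection plus layer norm keeps the query modality dominant.
        return self.norm(query_feats + fused)

# Toy usage: fuse text with audio, then with video (all shapes assumed).
text = torch.randn(2, 10, 256)   # text utterance features
audio = torch.randn(2, 10, 256)  # audio features
video = torch.randn(2, 10, 256)  # visual features

block = CrossModalAttention()
text_audio = block(text, audio)        # text attends to audio
multimodal = block(text_audio, video)  # fused result attends to video
print(multimodal.shape)                # torch.Size([2, 10, 256])

Chaining pairwise cross-attention blocks like this is one common way to build a multimodal encoder; the paper's actual fusion order and block internals may differ.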