Revisiting Disentanglement and Fusion on Modality and Context in Conversational Multimodal Emotion Recognition

Authors
Bobo Li, Hao Fei, Lizi Liao, Yu Zhao, Chong Teng, Tat-Seng Chua, Donghong Ji, Fei Li
Identifier
DOI:10.1145/3581783.3612053
Abstract

Enabling machines to understand human emotions from multimodal signals in dialogue scenarios has become a hot research topic, formalized as multimodal emotion recognition in conversation (MM-ERC). MM-ERC has received consistent attention in recent years, and a diverse range of methods has been proposed to improve task performance. Most existing works treat MM-ERC as a standard multimodal classification problem and perform multimodal feature disentanglement and fusion to maximize feature utility. Yet, after revisiting the characteristics of MM-ERC, we argue that feature multimodality and conversational contextualization should both be modeled properly and simultaneously during the feature disentanglement and fusion steps. In this work, we aim to push task performance further by taking full account of these insights. On the one hand, during feature disentanglement, we devise a Dual-level Disentanglement Mechanism (DDM) based on contrastive learning to decouple features into both the modality space and the utterance space. On the other hand, during feature fusion, we propose a Contribution-aware Fusion Mechanism (CFM) and a Context Refusion Mechanism (CRM) for multimodal and context integration, respectively; together they schedule the proper integration of multimodal and context features. Specifically, CFM explicitly and dynamically manages the contributions of the multimodal features, while CRM flexibly coordinates the introduction of dialogue context. On two public MM-ERC datasets, our system consistently achieves new state-of-the-art performance. Further analyses demonstrate that all the proposed mechanisms greatly facilitate MM-ERC by adaptively making full use of the multimodal and context features, and that they hold great potential for a broader range of conversational multimodal tasks.
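To make the two fusion ideas named in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of (a) a contribution-aware gate that softmax-weights per-modality features and (b) a sigmoid gate that controls how much dialogue context is (re)introduced. The module and function names, tensor shapes, and gating formulas are illustrative assumptions for exposition only, not the authors' released implementation of CFM or CRM.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContributionAwareFusion(nn.Module):
    """Fuse per-utterance modality features (e.g. text/audio/visual) with
    dynamic contribution weights: a learned softmax gate over modalities.
    Hypothetical sketch, not the paper's actual CFM."""

    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)   # feature -> scalar contribution score
        self.proj = nn.Linear(dim, dim)   # projection of the fused representation

    def forward(self, feats: torch.Tensor):
        # feats: (batch, num_modalities, dim), one row per modality
        scores = self.scorer(feats).squeeze(-1)             # (batch, M)
        weights = F.softmax(scores, dim=-1)                  # per-modality contributions
        fused = (weights.unsqueeze(-1) * feats).sum(dim=1)   # contribution-weighted sum
        return self.proj(fused), weights


def context_gate(utt: torch.Tensor, ctx: torch.Tensor, gate: nn.Linear) -> torch.Tensor:
    """Blend the current utterance vector with a dialogue-context vector via a
    learned sigmoid gate, so context is introduced only as much as needed.
    Hypothetical stand-in for the context refusion idea."""
    g = torch.sigmoid(gate(torch.cat([utt, ctx], dim=-1)))   # (batch, dim)
    return g * utt + (1.0 - g) * ctx


if __name__ == "__main__":
    batch, dim = 4, 256
    cfm = ContributionAwareFusion(dim)
    fused, w = cfm(torch.randn(batch, 3, dim))                # 3 modalities
    blended = context_gate(fused, torch.randn(batch, dim), nn.Linear(2 * dim, dim))
    print(fused.shape, w.shape, blended.shape)                # sanity check of shapes

A real system would feed the gated representation into an utterance-level emotion classifier; the point of the sketch is only that both gates are learned end-to-end and produce interpretable per-modality and per-utterance weights.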