A multitask learning model for multimodal sarcasm, sentiment and emotion recognition in conversations

Keywords: sarcasm, computer science, conversation, sentiment analysis, artificial intelligence, natural language processing, leverage (statistics), multitask learning, cognitive psychology, machine learning, human–computer interaction, psychology, task (project management), communication, management, economics, art, literature
Authors
Yazhou Zhang,Jinglin Wang,Yaochen Liu,Lu Rong,Qian Zheng,Dawei Song,Prayag Tiwari,Jing Qin
Source
Journal: Information Fusion [Elsevier]
Volume 93, pp. 282–301 · Citations: 50
Identifier
DOI:10.1016/j.inffus.2023.01.005
Abstract

Sarcasm, sentiment and emotion are tightly coupled, in that understanding one aids the understanding of the others, which makes their joint recognition in conversation a research focus in artificial intelligence (AI) and affective computing. Three main challenges exist: context dependency, multimodal fusion and multitask interaction. However, most existing works fail to explicitly leverage and model the relationships among related tasks. In this paper, we aim to address these three problems generically within a joint multimodal framework. We thus propose a multimodal multitask learning model based on the encoder–decoder architecture, termed M2Seq2Seq. At the heart of the encoder module are two attention mechanisms, i.e., intramodal (Ia) attention and intermodal (Ie) attention: Ia attention is designed to capture the contextual dependency between adjacent utterances, while Ie attention is designed to model multimodal interactions. On the decoder side, we design two kinds of multitask learning (MTL) decoders, i.e., single-level and multilevel decoders, to explore their potential. More specifically, the core of the single-level decoder is a masked outer-modal (Or) self-attention mechanism, whose main motivation is to explicitly model the interdependence among the tasks of sarcasm, sentiment and emotion recognition. The core of the multilevel decoder contains shared gating and task-specific gating networks. Comprehensive experiments on four benchmark datasets, MUStARD, Memotion, CMU-MOSEI and MELD, demonstrate the effectiveness of M2Seq2Seq over state-of-the-art baselines (e.g., CM-GCN, A-MTL), with significant improvements of 1.9%, 2.0%, 5.0%, 0.8%, 4.3%, 3.1%, 2.8%, 1.0%, 1.7% and 2.8% in terms of Micro F1.
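The masked self-attention idea in the single-level decoder can be illustrated with a minimal NumPy sketch. The paper's exact formulation is not given in the abstract, so the dimensions, the three-task setup, and the mask pattern (each task attends only to the other two tasks, modeling cross-task interdependence) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_self_attention(X, mask):
    """Scaled dot-product self-attention over task representations.

    X:    (n_tasks, d) matrix of task-specific representations.
    mask: (n_tasks, n_tasks) boolean; True where attention is allowed.
    Disallowed positions get a large negative score, hence ~zero weight.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)            # (n_tasks, n_tasks)
    scores = np.where(mask, scores, -1e9)    # mask out disallowed links
    weights = softmax(scores, axis=-1)       # rows sum to 1
    return weights @ X, weights

# Hypothetical usage: rows stand for sarcasm, sentiment, emotion.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 8))
mask = ~np.eye(3, dtype=bool)   # each task attends to the other two only
out, w = masked_self_attention(X, mask)
```

Each task's updated representation is then a mixture of the other tasks' representations, which is one plausible reading of "explicitly modeling the interdependence among the tasks".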
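Similarly, the shared versus task-specific gating in the multilevel decoder can be sketched as a learned convex blend of two projections. Again, this is a generic gating pattern under assumed shapes, not the paper's actual parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_task_output(h, W_shared, W_task, W_gate):
    """Blend a shared and a task-specific representation via a gate.

    h: (d,) input utterance representation.
    Each W_*: (d, d) projection matrix. The gate decides, per dimension,
    how much shared versus task-specific signal reaches the task head.
    """
    shared = np.tanh(W_shared @ h)     # representation shared by all tasks
    specific = np.tanh(W_task @ h)     # representation for one task
    g = sigmoid(W_gate @ h)            # gate values in (0, 1)
    return g * shared + (1.0 - g) * specific

# Hypothetical usage with random parameters, d = 8.
rng = np.random.default_rng(1)
d = 8
h = rng.standard_normal(d)
W_shared, W_task, W_gate = (rng.standard_normal((d, d)) for _ in range(3))
out = gated_task_output(h, W_shared, W_task, W_gate)
```

One such gated head per task lets all tasks draw on a common representation while retaining task-specific features, which matches the abstract's description at a high level.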
