Modeling Multi-Task Joint Training of Aggregate Networks for Multi-Modal Sarcasm Detection

Sarcasm, Modality (human-computer interaction), Context, Task, Artificial intelligence, Machine learning, Computer science, Speech recognition, Linguistics
Authors
Lisong Ou,Zhixin Li
Identifier
DOI:10.1145/3652583.3658015
Abstract

With the continuous emergence of various types of social media, which people often use to express their emotions in daily life, the multi-modal sarcasm detection (MSD) task has attracted increasing attention. However, owing to the unique nature of sarcasm itself, two main challenges remain on the way to robust MSD: 1) existing mainstream methods often fail to account for the weak correlation between modalities, thereby ignoring important sarcasm cues carried by each individual modality; 2) modeling cross-modal interactions over unaligned multi-modal data is inefficient. Therefore, this paper proposes a multi-task jointly trained aggregation network (MTAN), which assigns networks suited to each modality according to its processing task. Specifically, we design a multi-task CLIP framework comprising a uni-modal text task, a uni-modal image task, and a cross-modal interaction task, which exploits sentiment cues from multiple tasks for multi-modal sarcasm detection. In addition, we design a global-local cross-modal interaction learning method that uses the discourse-level representation of each modality as a global multi-modal context interacting with local uni-modal features. This not only avoids the quadratic scaling cost of previous local-local cross-modal interaction methods but also lets the global multi-modal context and the local uni-modal features reinforce each other and improve progressively through multi-layer stacking. Extensive experiments and in-depth analysis show that our model achieves state-of-the-art performance on multi-modal sarcasm detection.
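The abstract describes two mechanisms that a small sketch can make concrete: the global-local cross-modal interaction, in which a single discourse-level global token per modality attends to the other modality's local features instead of running local-local attention, and the three-task joint training objective. The PyTorch sketch below illustrates one plausible reading of both. All names (GlobalLocalBlock, MTANSketch, joint_loss), dimensions, and loss weights are hypothetical illustrations rather than the authors' released implementation, and the CLIP encoders are assumed to have already produced token sequences with a leading [CLS]-style token.

# Minimal sketch, assuming pre-extracted CLIP token features; names and
# hyperparameters are illustrative, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalLocalBlock(nn.Module):
    """One layer of global-local interaction: a single global token from one
    modality attends to the local token sequence of the other modality,
    costing O(L) per layer instead of the O(L_t * L_v) of local-local attention."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, global_tok, local_seq):
        # global_tok: (B, 1, D)   local_seq: (B, L, D)
        out, _ = self.attn(query=global_tok, key=local_seq, value=local_seq)
        return self.norm(global_tok + out)  # residual update of the global context


class MTANSketch(nn.Module):
    def __init__(self, dim: int = 512, layers: int = 3, num_classes: int = 2):
        super().__init__()
        self.text_blocks = nn.ModuleList([GlobalLocalBlock(dim) for _ in range(layers)])
        self.image_blocks = nn.ModuleList([GlobalLocalBlock(dim) for _ in range(layers)])
        self.text_head = nn.Linear(dim, num_classes)         # uni-modal text task
        self.image_head = nn.Linear(dim, num_classes)        # uni-modal image task
        self.fusion_head = nn.Linear(2 * dim, num_classes)   # cross-modal interaction task

    def forward(self, text_tokens, image_tokens):
        # text_tokens: (B, L_t, D), image_tokens: (B, L_v, D); token 0 is the
        # discourse-level ([CLS]-style) representation of each modality.
        t_glob, v_glob = text_tokens[:, :1], image_tokens[:, :1]
        for t_blk, v_blk in zip(self.text_blocks, self.image_blocks):
            # The global text context reads the local image features and vice
            # versa; stacking layers refines both progressively.
            t_glob, v_glob = t_blk(t_glob, image_tokens), v_blk(v_glob, text_tokens)
        t_glob, v_glob = t_glob.squeeze(1), v_glob.squeeze(1)
        return (
            self.text_head(text_tokens[:, 0]),
            self.image_head(image_tokens[:, 0]),
            self.fusion_head(torch.cat([t_glob, v_glob], dim=-1)),
        )


def joint_loss(logits_text, logits_image, logits_fused, labels, w=(0.5, 0.5, 1.0)):
    # Weighted sum of the three task losses; the weights are placeholders.
    return (
        w[0] * F.cross_entropy(logits_text, labels)
        + w[1] * F.cross_entropy(logits_image, labels)
        + w[2] * F.cross_entropy(logits_fused, labels)
    )

Under this reading, each layer performs O(L) attention per modality rather than O(L_t x L_v), and the stacked layers repeatedly refine the two global contexts against the opposite modality's local features. A fuller version would also write the updated global context back into the local token sequences so that, as the abstract puts it, the two are mutually reinforcing.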