Multization: Multi-Modal Summarization Enhanced by Multi-Contextually Relevant and Irrelevant Attention Alignment

Authors
Huan Rong, Zhongfeng Chen, Zhenyu Lu, Fan Xu, Victor S. Sheng
Source
Journal: ACM Transactions on Asian and Low-Resource Language Information Processing, Vol. 23(5): 1-29
Identifier
DOI: 10.1145/3651983
Abstract

This article focuses on the task of Multi-Modal Summarization with Multi-Modal Output for JD.COM (China) e-commerce product descriptions containing both source text and source images. When learning context from multi-modal (text and image) input, a semantic gap exists between the two modalities, so capturing their shared cross-modal semantics early becomes crucial for multi-modal summarization. Moreover, when generating the multi-modal summary, the relevance and irrelevance of the multi-modal contexts to the target summary should be weighed according to the different contributions of the input text and images, so as to optimize the cross-modal context learning that guides summary generation and to emphasize the significant semantics within each modality. To address these challenges, Multization is proposed to enhance multi-modal semantic information through multi-contextually relevant and irrelevant attention alignment. Specifically, a Semantic Alignment Enhancement mechanism captures the semantics shared between modalities (text and image), strengthening crucial multi-modal information in the encoding stage. In addition, an IR-Relevant Multi-Context Learning mechanism observes the summary generation process from both relevant and irrelevant perspectives, forming a multi-modal context that incorporates both textual and visual semantic information. Experimental results on the JD.COM (China) e-commerce dataset demonstrate that the proposed Multization method effectively captures the semantics shared between the source text and source images, highlights essential semantics, and generates a multi-modal summary (including both image and text) that comprehensively considers the semantic information of both modalities.
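The abstract does not give the model's equations, so the following is only an illustrative sketch of the general idea behind capturing shared cross-modal semantics: each text token attends over image-region features via scaled dot-product attention, and the attended image context is fused back into the text representation. All function names, dimensions, and the residual-fusion choice here are hypothetical, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_align(text_feats, image_feats):
    """Attend each text token over image regions and fuse the result.

    text_feats:  (n_text, d) token embeddings
    image_feats: (n_img, d)  image-region embeddings, same dimension d
    Returns the image-enhanced text features and the attention weights.
    """
    d = text_feats.shape[-1]
    scores = text_feats @ image_feats.T / np.sqrt(d)  # (n_text, n_img)
    attn = softmax(scores, axis=-1)                   # relevance of each region per token
    aligned = attn @ image_feats                      # image context for each token
    return text_feats + aligned, attn                 # residual fusion

# Toy usage with random features.
rng = np.random.default_rng(0)
text = rng.normal(size=(5, 16))    # 5 tokens
image = rng.normal(size=(3, 16))   # 3 image regions
fused, attn = cross_modal_align(text, image)
```

In a real system, the relevant/irrelevant distinction the paper describes would additionally weight or mask these attention scores with respect to the target summary; that step is omitted here because the abstract does not specify its form.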