Cross-modality synergy network for referring expression comprehension and segmentation

Authors
Qian‐Zhong Li,Yujia Zhang,Shiying Sun,Jinting Wu,Xiaoguang Zhao,Min Tan
Source
Journal: Neurocomputing [Elsevier]
Volume 467, pages 99–114 · Cited by: 17
Identifier
DOI:10.1016/j.neucom.2021.09.066
Abstract

Referring expression comprehension and segmentation aim to locate and segment a referred instance in an image according to a natural language expression. However, existing methods tend to ignore the interaction between the visual and language modalities during visual feature learning, and establishing a synergy between the two modalities remains a considerable challenge. To tackle these problems, we propose a novel end-to-end framework, the Cross-Modality Synergy Network (CMS-Net), which addresses the two tasks jointly. In this work, we propose an attention-aware representation learning module to learn modal representations for both images and expressions. Within this module, a language self-attention submodule learns expression representations by leveraging intra-modality relations, and a language-guided channel-spatial attention submodule obtains language-aware visual representations under language guidance, which helps the model attend to referent-relevant regions in the images and mitigates background interference. We then design a cross-modality synergy module that establishes inter-modality relations for modality fusion. Specifically, a language-visual similarity is computed at each position of the visual feature map, and synergy between the two modalities is achieved in both the semantic and spatial dimensions. Furthermore, we propose a multi-scale feature fusion module with a selective strategy that aggregates the important information from multi-scale features, yielding the final localization and segmentation results. We conduct extensive experiments on four challenging benchmarks, and our framework achieves significant performance gains over state-of-the-art methods.
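The abstract describes computing a language-visual similarity at each position of the visual feature map. As an illustrative sketch only (not the authors' actual CMS-Net implementation, whose details are in the paper), the snippet below assumes cosine similarity between a single pooled language embedding and the visual feature vector at every spatial location, producing a spatial similarity map that highlights referent-relevant regions:

```python
import numpy as np

def language_visual_similarity(visual_feat, lang_feat, eps=1e-8):
    """Cosine similarity between a language embedding and the visual
    feature vector at every spatial position.

    visual_feat: (C, H, W) array of visual features.
    lang_feat:   (C,) pooled language embedding.
    Returns an (H, W) similarity map in [-1, 1].
    """
    C, H, W = visual_feat.shape
    v = visual_feat.reshape(C, H * W)                               # (C, HW)
    v_norm = v / (np.linalg.norm(v, axis=0, keepdims=True) + eps)   # unit vectors per position
    l_norm = lang_feat / (np.linalg.norm(lang_feat) + eps)          # unit language vector
    sim = l_norm @ v_norm                                           # (HW,) dot products
    return sim.reshape(H, W)

# Toy usage: make the query identical to the center feature vector,
# so the center of the map should score highest (similarity ~1.0).
rng = np.random.default_rng(0)
V = rng.normal(size=(4, 3, 3))
q = V[:, 1, 1].copy()
sim_map = language_visual_similarity(V, q)
print(round(float(sim_map[1, 1]), 4))  # ~1.0 at the matching position
```

In a fusion module, such a map would typically be used to reweight or gate the visual features before the segmentation head; the exact fusion strategy in CMS-Net differs and is described in the paper.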