Cross-modality synergy network for referring expression comprehension and segmentation

Authors
Qian‐Zhong Li, Yujia Zhang, Shiying Sun, Jinting Wu, Xiaoguang Zhao, Min Tan
Source
Journal: Neurocomputing [Elsevier BV]
Volume/Issue: 467: 99-114; Cited by: 17
Identifier
DOI: 10.1016/j.neucom.2021.09.066
Abstract

Referring expression comprehension and segmentation aim to locate and segment a referred instance in an image according to a natural language expression. However, existing methods tend to ignore the interaction between visual and language modalities for visual feature learning, and establishing a synergy between the visual and language modalities remains a considerable challenge. To tackle the above problems, we propose a novel end-to-end framework, Cross-Modality Synergy Network (CMS-Net), to address the two tasks jointly. In this work, we propose an attention-aware representation learning module to learn modal representations for both images and expressions. A language self-attention submodule is proposed in this module to learn expression representations by leveraging the intra-modality relations, and a language-guided channel-spatial attention submodule is introduced to obtain the language-aware visual representations under language guidance, which helps the model pay more attention to the referent-relevant regions in the images and relieve background interference. Then, we design a cross-modality synergy module to establish the inter-modality relations for modality fusion. Specifically, a language-visual similarity is obtained at each position of the visual feature map, and the synergy is achieved between the two modalities in both semantic and spatial dimensions. Furthermore, we propose a multi-scale feature fusion module with a selective strategy to aggregate the important information from multi-scale features, yielding target results. We conduct extensive experiments on four challenging benchmarks, and our framework achieves significant performance gains over state-of-the-art methods.
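The abstract states that the cross-modality synergy module computes a language-visual similarity at each position of the visual feature map. A minimal sketch of that idea, assuming a cosine similarity between a pooled language vector and per-position visual features (the function name, shapes, and pooling choice are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def language_visual_similarity(visual_feat, lang_feat):
    """Cosine similarity between a language vector and every spatial
    position of a visual feature map (hypothetical sketch).

    visual_feat: (C, H, W) visual feature map
    lang_feat:   (C,) pooled language representation
    returns:     (H, W) similarity map with values in [-1, 1]
    """
    C, H, W = visual_feat.shape
    v = visual_feat.reshape(C, H * W)                       # flatten spatial dims
    v = v / (np.linalg.norm(v, axis=0, keepdims=True) + 1e-8)   # unit-normalize per position
    l = lang_feat / (np.linalg.norm(lang_feat) + 1e-8)          # unit-normalize language vector
    sim = l @ v                                             # dot product = cosine similarity
    return sim.reshape(H, W)

# Toy example: random 256-channel features on a 14x14 grid
rng = np.random.default_rng(0)
sim_map = language_visual_similarity(rng.standard_normal((256, 14, 14)),
                                     rng.standard_normal(256))
print(sim_map.shape)  # (14, 14)
```

Such a map can then weight the visual features spatially so that referent-relevant regions dominate the fused representation, which matches the abstract's description of synergy in both semantic and spatial dimensions.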