Hire: Hybrid-modal Interaction with Multiple Relational Enhancements for Image-Text Matching

Authors
Xuri Ge, Fuhai Chen, Songpei Xu, Fuxiang Tao, Jie Wang, Joemon M. Jose
Source
Journal: Cornell University - arXiv
Identifier
DOI: 10.48550/arxiv.2406.18579
Abstract

Image-text matching (ITM) is a fundamental problem in computer vision. The key issue lies in jointly learning the visual and textual representations to estimate their similarity accurately. Most existing methods focus on feature enhancement within a modality or feature interaction across modalities, but they neglect the contextual information of object representations based on inter-object relationships, which match the corresponding sentences with rich contextual semantics. In this paper, we propose a Hybrid-modal Interaction with multiple Relational Enhancements (termed Hire) for image-text matching, which correlates the intra- and inter-modal semantics between objects and words with implicit and explicit relationship modelling. In particular, an explicit intra-modal spatial-semantic graph-based reasoning network is designed to improve the contextual representation of visual objects with salient spatial and semantic relational connectivities, guided by the explicit relationships of the objects' spatial positions and their scene graph. We use implicit relationship modelling for potential relationship interactions before explicit modelling to improve the fault tolerance of explicit relationship detection. The visual and textual semantic representations are then refined jointly via inter-modal interactive attention and cross-modal alignment. To correlate the context of objects with the textual context, we further refine the visual semantic representation via cross-level object-sentence and word-image-based interactive attention. Extensive experiments validate that the proposed hybrid-modal interaction with implicit and explicit modelling is more beneficial for image-text matching, and the proposed Hire obtains new state-of-the-art results on the MS-COCO and Flickr30K benchmarks.
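The abstract combines two generic building blocks that are common in this line of work: relational enhancement of region features over a graph, followed by cross-modal interactive attention between words and regions to score image-sentence similarity. The sketch below is not the paper's implementation; it is a minimal NumPy illustration of these two ideas under assumed shapes (36 detected regions, 7 words, 64-d features) and an assumed random adjacency standing in for the spatial-semantic graph.

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    # Scale feature vectors to unit length for cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def relational_enhancement(regions, adjacency):
    """One graph-propagation step: each region aggregates features from
    its related regions (a GCN-like row-normalized update with self-loops).
    The adjacency here is a stand-in for a spatial/semantic relation graph."""
    A = adjacency + np.eye(adjacency.shape[0])  # add self-connections
    A = A / A.sum(axis=1, keepdims=True)        # row-normalize weights
    return A @ regions                          # (K, d) enhanced regions

def cross_modal_attention(words, regions, temperature=0.1):
    """For each word, softly attend over image regions and return the
    attended visual context vector (one per word)."""
    scores = l2norm(words) @ l2norm(regions).T  # (T, K) cosine scores
    attn = np.exp(scores / temperature)
    attn /= attn.sum(axis=1, keepdims=True)     # softmax over regions
    return attn @ regions                       # (T, d) context per word

def image_sentence_similarity(words, regions):
    """Aggregate per-word cosine similarity with its attended visual
    context into a single image-sentence matching score."""
    ctx = cross_modal_attention(words, regions)
    sims = np.sum(l2norm(words) * l2norm(ctx), axis=1)
    return float(sims.mean())

rng = np.random.default_rng(0)
words = rng.standard_normal((7, 64))            # 7 word features (assumed)
regions = rng.standard_normal((36, 64))         # 36 region features (assumed)
adj = (rng.random((36, 36)) < 0.1).astype(float)  # toy relation graph
regions_enh = relational_enhancement(regions, adj)
score = image_sentence_similarity(words, regions_enh)
```

In the actual method these steps would be learned end-to-end (with projection weights and both implicit and explicit relation graphs); the sketch only shows the direction of information flow: relation-aware region features first, then word-conditioned attention over them.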