Enhancing Visual Grounding in Vision-Language Pre-Training With Position-Guided Text Prompts

Keywords: Computer science, Artificial intelligence, Block (permutation group theory), Grounding, Sentence, Task (project management), Object (grammar), Task analysis, Computer vision, Pattern recognition (psychology), Natural language processing, Engineering, Mathematics, Geometry, Electrical engineering, Systems engineering
Authors
Jinpeng Wang,Pan Zhou,Mike Zheng Shou,Shuicheng Yan
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Volume/Issue: 46 (5): 3406-3421. Cited by: 3
Identifier
DOI:10.1109/tpami.2023.3343736
Abstract

Vision-Language Pre-Training (VLP) has demonstrated remarkable potential in aligning image and text pairs, paving the way for a wide range of cross-modal learning tasks. Nevertheless, we have observed that VLP models often fall short in terms of visual grounding and localization capabilities, which are crucial for many downstream tasks, such as visual reasoning. In response, we introduce a novel Position-guided Text Prompt (PTP) paradigm to bolster the visual grounding abilities of cross-modal models trained with VLP. In the VLP phase, PTP divides an image into N x N blocks and employs a widely-used object detector to identify objects within each block. PTP then reframes the visual grounding task as a fill-in-the-blank problem, encouraging the model to predict objects in given blocks or regress the blocks of a given object, exemplified by filling "[P]" or "[O]" in a PTP sentence such as "The block [P] has a [O]." This strategy enhances the visual grounding capabilities of VLP models, enabling them to better tackle various downstream tasks. Additionally, we integrate second-order relationships between objects to further enhance the visual grounding capabilities of our proposed PTP paradigm. Incorporating PTP into several state-of-the-art VLP frameworks leads to consistently significant improvements across representative cross-modal learning model architectures and multiple benchmarks, such as zero-shot Flickr30k Retrieval (+5.6 in average recall@1) for the ViLT baseline, and COCO Captioning (+5.5 in CIDEr) for the state-of-the-art BLIP baseline. Furthermore, PTP attains results comparable to object-detector-based methods with a faster inference speed, as it discards its object detector during inference, unlike other approaches. Our code and pre-trained models are available at https://github.com/sail-sg/ptp.
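To make the fill-in-the-blank formulation concrete, below is a minimal sketch in Python of how position-guided prompts of the form "The block [P] has a [O]." could be generated from detector output during pre-training. The helper names (block_index, make_ptp_prompts), the row-major block numbering, and the use of object center points are illustrative assumptions rather than the authors' exact implementation; the official code at https://github.com/sail-sg/ptp may differ.

```python
# Minimal sketch: turning object detections into PTP-style prompts.
# Assumes detections are (label, center_x, center_y) tuples in pixel coordinates;
# block numbering is row-major, which is an assumption for illustration only.

from typing import List, Tuple


def block_index(cx: float, cy: float, width: int, height: int, n: int) -> int:
    """Map an object's center point to one of the N x N grid blocks (row-major)."""
    col = min(int(cx / width * n), n - 1)
    row = min(int(cy / height * n), n - 1)
    return row * n + col


def make_ptp_prompts(detections: List[Tuple[str, float, float]],
                     width: int, height: int, n: int = 3) -> List[str]:
    """Fill "[P]" and "[O]" slots in the template "The block [P] has a [O]."."""
    prompts = []
    for label, cx, cy in detections:
        p = block_index(cx, cy, width, height, n)
        prompts.append(f"The block {p} has a {label}.")
    return prompts


if __name__ == "__main__":
    # Toy example: a 640x480 image with two detected objects.
    dets = [("dog", 100.0, 400.0), ("frisbee", 500.0, 120.0)]
    for sentence in make_ptp_prompts(dets, width=640, height=480, n=3):
        print(sentence)
```

During training, either the block token [P] or the object token [O] in such a sentence can be masked, so the model learns to predict the object given a block or to regress the block given an object, which is the fill-in-the-blank objective described in the abstract.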