Cross-Modal Adaptive Dual Association for Text-to-Image Person Retrieval

Authors
D. M. Lin, Yi-Xing Peng, Jingke Meng, Wei-Shi Zheng
Source
Journal: IEEE Transactions on Multimedia (Institute of Electrical and Electronics Engineers)
Volume 26, pp. 6609-6620. Cited by: 24
Identifier
DOI: 10.1109/tmm.2024.3355644
Abstract

Text-to-image person re-identification (ReID) aims to retrieve images of a person based on a given textual description. The key challenge is to learn the relations between detailed information from the visual and textual modalities. Existing work focuses on learning a latent space to narrow the modality gap and further build local correspondences between the two modalities. However, these methods assume that image-to-text and text-to-image associations are modality-agnostic, resulting in suboptimal associations. In this work, we demonstrate the discrepancy between image-to-text association and text-to-image association and propose Cross-modal Adaptive Dual Association (CADA) to build fine-grained bidirectional image-text associations. Our approach features a decoder-based adaptive dual association module that enables full interaction between the visual and textual modalities, supporting bidirectional and adaptive cross-modal correspondence. Specifically, this paper proposes a bidirectional association mechanism: Association of text Tokens to image Patches (ATP) and Association of image Regions to text Attributes (ARA). We model the ATP adaptively based on the observation that aggregating cross-modal features according to mistaken associations leads to feature distortion. For modeling the ARA, since attributes are typically the first distinguishing cues of a person, we explore attribute-level associations by predicting a masked text phrase from the related image region. Finally, we learn the dual associations between texts and images, and the experimental results demonstrate the superiority of our dual formulation. The code used in this article will be made publicly available at https://github.com/LinDixuan/CADA .
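The token-to-patch association (ATP) described in the abstract can be illustrated as a single cross-attention step in which each text token attends over all image patches and aggregates their features; a mistaken association would then mix in unrelated patch features, which is the feature distortion the paper exploits. The following is a minimal sketch under that reading, not the paper's actual implementation; the function name, dimensions, and single-head attention are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def associate_tokens_to_patches(text_tokens, image_patches):
    """Illustrative ATP-style cross-attention (hypothetical, single head).

    text_tokens:   (n_tokens, d)  text-side query features
    image_patches: (n_patches, d) image-side key/value features
    Returns the per-token attention over patches and the aggregated
    patch features for each token.
    """
    d = text_tokens.shape[-1]
    # Scaled dot-product scores: how strongly each token associates
    # with each image patch.
    scores = text_tokens @ image_patches.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)      # rows sum to 1 over patches
    aggregated = weights @ image_patches    # cross-modal feature aggregation
    return weights, aggregated

# Toy example: 4 text tokens attending over 6 image patches (d = 8).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
patches = rng.normal(size=(6, 8))
w, agg = associate_tokens_to_patches(tokens, patches)
```

If a token's attention row concentrates on the wrong patches, `agg` for that token is a blend of unrelated patch features; training the association adaptively, as CADA does, is motivated by penalizing exactly this kind of distortion.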