
Text-to-Image Person Re-Identification Based on Multimodal Graph Convolutional Network

Keywords: Computer Science; Artificial Intelligence; Graph; Identification (Biology); Convolutional Neural Network; Pattern Recognition (Psychology); Natural Language Processing; Information Retrieval; Theoretical Computer Science
Authors
Guang Han, Min Lin, Ziyang Li, Haitao Zhao, Sam Kwong
Source
Journal: IEEE Transactions on Multimedia [Institute of Electrical and Electronics Engineers]
Volume 26, pp. 6025-6036 · Cited by: 17
Identifier
DOI: 10.1109/tmm.2023.3344354
Abstract

Text-to-image person re-identification (ReID) is a subproblem at the intersection of person re-identification and image-text retrieval. Recent approaches generally follow a dual-stream architecture that extracts image and text features separately. Because there is no deep interaction between the image and text streams, such networks struggle to learn highly semantic feature representations. In addition, for both image and text data, feature extraction is typically modeled in a regular way, for example by using a Transformer to extract sequence embeddings; this kind of modeling disregards the inherent relationships among the multimodal input embeddings. A more flexible approach is proposed that uniformly treats multimodal data as graphs, so that the extraction and interaction of multimodal information are accomplished through message passing between graph nodes. First, a unified multimodal feature-extraction and fusion network based on the graph convolutional network is proposed, which enables the progression of multimodal information from 'local' to 'global'. Second, an asymmetric multilevel alignment module, which focuses on more accurate 'local' information from a 'global' perspective, is proposed to progressively divide the multimodal information at each level. Last, a cross-modal representation-matching strategy based on similarity distribution and mutual information is proposed to achieve cross-modal alignment. The proposed algorithm is simple and efficient, and results on three public datasets (CUHK-PEDES, ICFG-PEDES, and RSTPReID) show that it achieves SOTA-level performance.
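The core mechanism the abstract describes, treating image patches and word tokens as nodes of a single graph so that information flows between modalities via message passing, can be illustrated with a minimal sketch. This is not the authors' implementation: the node counts, dense adjacency, and single symmetrically normalized GCN layer below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer with symmetric normalization.

    H: node features (N, d_in); A: adjacency (N, N); W: weights (d_in, d_out).
    Each node's new feature is a normalized aggregate of its neighbors'
    messages (plus its own, via a self-loop), followed by a ReLU.
    """
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)              # message passing + ReLU

# Hypothetical multimodal graph: image-patch nodes and word-token nodes
# stacked together, with dense intra- and cross-modal edges so that
# messages pass both within and between modalities.
rng = np.random.default_rng(0)
img_nodes = rng.normal(size=(4, 8))    # 4 image-patch embeddings
txt_nodes = rng.normal(size=(3, 8))    # 3 word-token embeddings
H = np.vstack([img_nodes, txt_nodes])  # unified graph: 7 nodes, dim 8
A = np.ones((7, 7)) - np.eye(7)        # fully connected across modalities
W = rng.normal(size=(8, 8))

H_out = gcn_layer(H, A, W)
print(H_out.shape)  # (7, 8): every node now mixes image and text context
```

After one such layer, every text node already carries aggregated image information and vice versa, which is the "deep interaction" that a dual-stream network without cross-modal message passing lacks.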