Computer science
Closed captioning
Image (mathematics)
Set (abstract data type)
Natural language
Language model
Image retrieval
Artificial intelligence
Scheme (mathematics)
Noise (video)
Information retrieval
Natural language processing
Mathematics
Mathematical analysis
Programming language
Authors
Yang Bai,Jingyao Wang,Min Cao,Chen Chen,Ziqiang Cao,Liqiang Nie,Min Zhang
Identifier
DOI:10.1145/3581783.3612285
Abstract
Text-based person search (TBPS) aims to retrieve images of a target person from a large image gallery based on a given natural language description. Existing methods are dominated by models trained on parallel image-text pairs, which are very costly to collect. In this paper, we make the first attempt to explore TBPS without parallel image-text data (μ-TBPS), in which only non-parallel images and texts, or even image-only data, can be adopted. Towards this end, we propose a two-stage framework, generation-then-retrieval (GTR), which first generates a corresponding pseudo text for each image and then performs retrieval in a supervised manner. In the generation stage, we propose a fine-grained image captioning strategy to obtain an enriched description of the person image: it first uses a set of instruction prompts to activate an off-the-shelf pretrained vision-language model to capture and generate fine-grained person attributes, and then converts the extracted attributes into a textual description via a finetuned large language model or a hand-crafted template. In the retrieval stage, considering the noise that the generated texts introduce into training, we develop a confidence score-based training scheme in which more reliable texts contribute more during training. Experimental results on multiple TBPS benchmarks (i.e., CUHK-PEDES, ICFG-PEDES and RSTPReid) show that the proposed GTR achieves promising performance without relying on parallel image-text data.
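To make the generation stage concrete, the following is a minimal Python sketch of the attribute-prompting idea described in the abstract, not the authors' implementation. The prompt set, the query_vlm interface, and the composition template are all hypothetical assumptions; the paper's actual prompts, the specific vision-language model, and the finetuned-LLM variant of the attribute-to-text step are not specified in the abstract.

# A minimal sketch of the generation stage, not the authors' code.
# The prompts, template, and query_vlm interface are hypothetical.

ATTRIBUTE_PROMPTS = {
    "gender": "What is the gender of the person?",
    "top": "What is the person wearing on the upper body?",
    "bottom": "What is the person wearing on the lower body?",
    "belongings": "What is the person carrying?",
}

TEMPLATE = "A {gender} wearing {top} and {bottom}, carrying {belongings}."


def query_vlm(image, prompt: str) -> str:
    """Placeholder for an off-the-shelf pretrained vision-language model
    activated with an instruction prompt (e.g., a VQA-style query)."""
    raise NotImplementedError  # plug in any pretrained VLM here


def generate_pseudo_text(image) -> str:
    # Step 1: prompt the VLM once per attribute to extract
    # fine-grained person attributes from the image.
    attributes = {name: query_vlm(image, prompt)
                  for name, prompt in ATTRIBUTE_PROMPTS.items()}
    # Step 2: convert the attributes into a textual description.
    # Shown here: the hand-crafted-template variant; the abstract also
    # mentions a finetuned large language model for this step.
    return TEMPLATE.format(**attributes)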
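The confidence score-based training scheme can likewise be sketched generically. The abstract defines neither the confidence score nor the underlying retrieval loss, so the weighting below, including the per-pair loss and the normalization, is an assumed form: each image/pseudo-text pair's loss is scaled by the estimated reliability of its generated text, so noisier texts contribute less.

import torch

def confidence_weighted_loss(per_pair_loss: torch.Tensor,
                             confidence: torch.Tensor) -> torch.Tensor:
    # per_pair_loss: shape (batch,), any per-pair matching/retrieval loss
    # confidence:    shape (batch,), in [0, 1]; higher = more reliable text
    # More reliable generated texts contribute more to the training signal.
    return (confidence * per_pair_loss).sum() / confidence.sum().clamp_min(1e-8)

# Example: the second pair has a low-confidence pseudo text and is down-weighted.
loss = confidence_weighted_loss(
    per_pair_loss=torch.tensor([0.9, 1.2, 0.3]),
    confidence=torch.tensor([0.85, 0.20, 0.95]),
)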