Keywords
Computer science
Image retrieval
Embedding
Transformer
Artificial intelligence
Query expansion
Encoder
Pattern recognition
Computer vision
Information retrieval
Authors
Yahui Xu, Yi Bin, Jiwei Wei, Yang Yang, Guoqing Wang, Heng Tao Shen
Identifier
DOI:10.1109/tmm.2023.3235495
Abstract
In this paper, we study composed query image retrieval, which aims at retrieving the target image matching a composed query, i.e., a reference image together with text describing the desired modification. Compared with conventional image retrieval, this task is more challenging: it requires not only precisely aligning the composed query and the target image in a common embedding space, but also simultaneously extracting the relevant information from both the reference image and the modification text. To extract this information, existing methods usually embed the vision-language inputs with different feature encoders, e.g., a CNN for images and an LSTM/BERT for text, and then employ a complicated, manually designed composition module to learn the joint image-text representation. However, the architectural discrepancy between the feature encoders restricts rich vision-language interaction, and such complicated composition designs can significantly hamper the generalization ability of the model. To tackle these problems, we propose a new framework, termed ComqueryFormer, which processes the composed query effectively with Transformers. Specifically, to eliminate the architectural discrepancy, we leverage a unified Transformer-based architecture that encodes the vision-language inputs homogeneously. Meanwhile, in place of a complicated composition module, a neat yet effective cross-modal Transformer hierarchically fuses the composed query at multiple visual scales. In addition, we introduce an efficient global-local alignment module to narrow the distance between the composed query and the target image: it not only considers the divergence in the global joint embedding space but also forces the model to focus on local detail differences. Extensive experiments on three real-world datasets demonstrate the superiority of our ComqueryFormer.
Our code can be found at: https://github.com/uestc-xyh/ComqueryFormer.
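The pipeline the abstract describes — fusing a reference image with modification text via cross-attention, then ranking target images by similarity in a joint embedding space — can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' implementation: the single-head attention, mean-pooling, and function names are all simplifying assumptions, and real encoders would supply the token features.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_tokens, context_tokens):
    # single-head scaled dot-product attention: text tokens attend to
    # reference-image tokens (no learned projections, for illustration)
    scores = query_tokens @ context_tokens.T / np.sqrt(query_tokens.shape[-1])
    return softmax(scores, axis=-1) @ context_tokens

def compose_query(image_tokens, text_tokens):
    # fuse the modification text with reference-image features,
    # then mean-pool into a single composed-query embedding
    fused = cross_attention(text_tokens, image_tokens)
    return fused.mean(axis=0)

def retrieve(query_vec, gallery):
    # cosine-similarity ranking over candidate target-image embeddings
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    return int(np.argmax(sims)), sims
```

In the real model each of these steps is a stack of Transformer layers applied at several visual scales, and the global similarity is complemented by the local alignment term described above.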