Keywords
Computer science; Information retrieval; Image retrieval; Data mining; Artificial intelligence; Representation; Matching; Modality; Complementarity
Authors
Feifei Zhang,Ming Yan,Ji Zhang,Changsheng Xu
Identifier
DOI: 10.1145/3503161.3548126
Abstract
Composed Query Based Image Retrieval (CQBIR) aims at searching for images relevant to a composed query, i.e., a reference image together with a modifier text. Compared with conventional image retrieval, which takes a single image or text to retrieve desired images, CQBIR faces more challenges, as it requires not only effective semantic correspondence between the heterogeneous query and target, but also a synergistic understanding of the composed query. To build a robust CQBIR model, four critical types of relational information can be exploited, i.e., cross-modal, intra-sample, inter-sample, and cross-sample relationships. Pioneering studies mainly exploit only part of this information, which makes it difficult for the different relationships to enhance and complement one another. In this paper, we propose a comprehensive relationship reasoning network that fully explores the four types of information for CQBIR, which mainly includes two key designs. First, we introduce a memory-augmented cross-modal attention module, in which the representation of the composed query is augmented by considering the cross-modal relationship between the reference image and the modification text. Second, we design a multi-scale matching strategy to optimize our network, aiming at harnessing information from the intra-sample, inter-sample, and cross-sample relationships. To the best of our knowledge, this is the first work to fully explore all four types of relationships in a unified deep model for CQBIR. Comprehensive experimental results on five standard benchmarks demonstrate that the proposed method performs favorably against state-of-the-art models.
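To make the first design more concrete, below is a minimal sketch, in PyTorch, of what a memory-augmented cross-modal attention block could look like: a learnable memory bank is attended to alongside the modifier-text tokens when refining the reference-image features into a single composed-query embedding. The class name, dimensions, pooling, and fusion choices here are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): memory-augmented cross-modal
# attention that fuses reference-image features with modifier-text features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryAugmentedCrossModalAttention(nn.Module):
    def __init__(self, dim=512, num_heads=8, memory_slots=64):
        super().__init__()
        # Learnable memory slots that every query can attend to,
        # in addition to the tokens of the other modality (an assumption
        # about how "memory-augmented" is realized).
        self.memory = nn.Parameter(torch.randn(memory_slots, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, txt_tokens):
        # img_tokens: (B, Ni, D) reference-image region features
        # txt_tokens: (B, Nt, D) modifier-text token features
        B = img_tokens.size(0)
        memory = self.memory.unsqueeze(0).expand(B, -1, -1)   # (B, M, D)
        # Image tokens attend over text tokens plus the shared memory.
        context = torch.cat([txt_tokens, memory], dim=1)       # (B, Nt+M, D)
        fused, _ = self.attn(img_tokens, context, context)
        fused = self.norm(img_tokens + fused)                  # residual
        # Pool into a single composed-query embedding.
        return F.normalize(fused.mean(dim=1), dim=-1)          # (B, D)


if __name__ == "__main__":
    block = MemoryAugmentedCrossModalAttention()
    img = torch.randn(2, 49, 512)   # e.g. a 7x7 CNN feature map, flattened
    txt = torch.randn(2, 12, 512)   # e.g. 12 word embeddings
    print(block(img, txt).shape)    # torch.Size([2, 512])
```

Under the second design, such a composed-query embedding would then be optimized against target-image embeddings with the multi-scale matching strategy spanning intra-sample, inter-sample, and cross-sample relationships; a batch-wise contrastive or triplet objective is one common way to instantiate this kind of matching, though the abstract does not specify the exact formulation.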