Computer science
Artificial intelligence
Context (archaeology)
Similarity (geometry)
Modal
Natural language processing
Image retrieval
Embedding
Word embedding
Semantic similarity
Word (group theory)
Visual word
Modality (human-computer interaction)
Information retrieval
Attention network
Process (computing)
Image (mathematics)
Pattern recognition (psychology)
Mathematics
Chemistry
Polymer chemistry
Paleontology
Operating system
Biology
Geometry
Authors
Qi Zhang,Zhen Lei,Zhaoxiang Zhang,Stan Z. Li
Identifier
DOI: 10.1109/cvpr42600.2020.00359
Abstract
As a typical cross-modal problem, image-text bi-directional retrieval relies heavily on joint embedding learning and a similarity measure for each image-text pair. The task remains challenging because prior works seldom explore semantic correspondences between modalities and semantic correlations within a single modality at the same time. In this work, we propose a unified Context-Aware Attention Network (CAAN), which selectively focuses on critical local fragments (regions and words) by aggregating the global context. Specifically, it simultaneously exploits global inter-modal alignments and intra-modal correlations to discover latent semantic relations. To account for the interactions between images and sentences during retrieval, intra-modal correlations are derived from the second-order attention of region-word alignments rather than from naively comparing distances between the original features. Our method achieves fairly competitive results on two generic image-text retrieval datasets, Flickr30K and MS-COCO.
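The mechanism the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' released implementation: the function names, the temperature value, the additive fusion of the two attention results, and the mean-cosine pooling at the end are all simplifying assumptions; only the overall structure (inter-modal region-word alignment plus intra-modal weights derived from second-order attention over those alignments) follows the abstract.

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    """L2-normalize an array along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def caan_similarity(regions, words, temperature=9.0):
    """Score one image-sentence pair (hypothetical sketch, not the paper's code).

    regions: (n_regions, d) image region features, e.g. from an object detector
    words:   (n_words, d)   word features, e.g. from a text encoder
    Returns a scalar similarity for the pair.
    """
    V, W = l2norm(regions), l2norm(words)

    # Inter-modal alignment matrix: cosine similarity of every region-word pair.
    A = V @ W.T                                            # (n_regions, n_words)

    # Inter-modal attention: each region aggregates the words it aligns with.
    attended_text = softmax(temperature * A, axis=1) @ W   # (n_regions, d)

    # Intra-modal, second-order attention: region-region weights come from
    # comparing rows of the alignment matrix A (how similarly two regions
    # align with this sentence), not from distances between raw features.
    R = softmax(temperature * (l2norm(A) @ l2norm(A).T), axis=1)
    attended_regions = R @ V                               # (n_regions, d)

    # Context-aware region representation: fuse both attention results
    # (simple addition here; an assumption, not the paper's exact fusion).
    ctx = l2norm(attended_regions + attended_text)

    # Simplified pooling: average cosine similarity between each
    # context-aware region and its attended text vector.
    return float(np.mean(np.sum(ctx * l2norm(attended_text), axis=1)))

# Toy usage with random features.
rng = np.random.default_rng(0)
score = caan_similarity(rng.normal(size=(36, 256)), rng.normal(size=(12, 256)))
print(f"pair similarity: {score:.3f}")
```

The notable design point, per the abstract, is that the intra-modal weights `R` depend on the alignment matrix `A` and therefore on the sentence being retrieved, so region-region attention changes with the query instead of being fixed by the image features alone.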