Computer science
Transformer
Encoder
Leverage (statistics)
Modal
Artificial intelligence
Pattern recognition (psychology)
Natural language processing
Information retrieval
Computer vision
Voltage
Chemistry
Physics
Quantum mechanics
Polymer chemistry
Operating system
Authors
Yi Bin, Haoxuan Li, Yang Xu, Xinzheng Xu, Yang Yang, Heng Tao Shen
Identifier
DOI:10.1145/3581783.3612427
Abstract
Most existing cross-modal retrieval methods employ two-stream encoders with different architectures for images and texts, e.g., CNN for images and RNN/Transformer for texts. Such discrepancy in architectures may induce different semantic distribution spaces and limit the interactions between images and texts, and further result in inferior alignment between images and texts. To fill this research gap, inspired by recent advances of Transformers in vision tasks, we propose to unify the encoder architectures with Transformers for both modalities. Specifically, we design a cross-modal retrieval framework purely based on two-stream Transformers, dubbed Hierarchical Alignment Transformers (HAT), which consists of an image Transformer, a text Transformer, and a hierarchical alignment module. With such identical architectures, the encoders could produce representations with more similar characteristics for images and texts, and make the interactions and alignments between them much easier. Besides, to leverage the rich semantics, we devise a hierarchical alignment scheme to explore multi-level correspondences of different layers between images and texts. To evaluate the effectiveness of the proposed HAT, we conduct extensive experiments on two benchmark datasets, MSCOCO and Flickr30K. Experimental results demonstrate that HAT outperforms SOTA baselines by a large margin. Specifically, on two key tasks, i.e., image-to-text and text-to-image retrieval, HAT achieves 7.6% and 16.7% relative score improvement of Recall@1 on MSCOCO, and 4.4% and 11.6% on Flickr30K respectively. The code is available at https://github.com/LuminosityX/HAT.
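To make the two-stream design and the layer-wise alignment idea concrete, the following is a minimal, illustrative sketch, not the authors' released implementation. The class names, layer counts, embedding dimensions, mean-pooling, and the InfoNCE-style contrastive loss are all assumptions made for illustration; the actual HAT code is available at the repository linked above.

```python
# Minimal sketch of two identical Transformer streams with a layer-wise
# ("hierarchical") alignment objective. All hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StreamEncoder(nn.Module):
    """A stack of Transformer encoder layers that also exposes every layer's output."""

    def __init__(self, dim=512, depth=4, heads=8):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            for _ in range(depth)
        ])

    def forward(self, x):
        layer_outputs = []
        for layer in self.layers:
            x = layer(x)
            layer_outputs.append(x)  # keep every layer for multi-level alignment
        return layer_outputs


def hierarchical_alignment_loss(img_layers, txt_layers, temperature=0.07):
    """Sum an InfoNCE-style contrastive loss over corresponding encoder layers."""
    loss = 0.0
    for img_feats, txt_feats in zip(img_layers, txt_layers):
        img_emb = F.normalize(img_feats.mean(dim=1), dim=-1)  # mean-pool patches
        txt_emb = F.normalize(txt_feats.mean(dim=1), dim=-1)  # mean-pool tokens
        logits = img_emb @ txt_emb.t() / temperature          # pairwise similarities
        targets = torch.arange(logits.size(0))                # matched pairs on diagonal
        loss = loss + F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)
    return loss / (2 * len(img_layers))


if __name__ == "__main__":
    image_encoder = StreamEncoder()  # identical architecture for both modalities
    text_encoder = StreamEncoder()
    patches = torch.randn(2, 49, 512)  # 2 images, 49 patch embeddings each
    tokens = torch.randn(2, 20, 512)   # 2 captions, 20 token embeddings each
    loss = hierarchical_alignment_loss(image_encoder(patches), text_encoder(tokens))
    print(loss.item())
```

Under this sketch, retrieval would rank candidates by the similarities aggregated across layers; the exact pooling, matching function, and alignment levels used by HAT may differ and should be checked against the released code.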