Keywords
Computer Science
Weighting
Artificial Intelligence
Feature Learning
Embedding
Cluster Analysis
Modality
Representation
Image
Pattern Recognition
Image Retrieval
Graph
Natural Language Processing
Theoretical Computer Science
Authors
Haoran Wang, Dongliang He, Wenhao Wu, Boyang Xia, Min Yang, Fu Li, Yunlong Yu, Zhong Ji, Errui Ding, Jingdong Wang
Identifier
DOI: 10.1007/978-3-031-20059-5_40
Abstract
Image-Text Retrieval (ITR) is challenging because it must bridge the visual and lingual modalities. Contrastive learning has been adopted by most prior art, but its capability is restricted by the limited number of negative image-text pairs, by manual weighting of negative pairs, and by unawareness of external knowledge. In this paper, we propose our novel Coupled Diversity-Sensitive Momentum Contrastive Learning (CODER) for improving cross-modal representation. First, a novel diversity-sensitive contrastive learning (DCL) architecture is introduced. We maintain dynamic dictionaries for both modalities to enlarge the pool of image-text pairs, and achieve diversity sensitivity through adaptive negative-pair weighting. Furthermore, two branches are designed in CODER. One learns instance-level embeddings from images/text, and it also generates pseudo online clustering labels for its input images/text based on their embeddings. Meanwhile, the other branch learns to query a commonsense knowledge graph to form concept-level descriptors for both modalities. Afterwards, both branches leverage DCL to align the cross-modal embedding spaces, while an extra pseudo clustering-label prediction loss is utilized to promote concept-level representation learning in the second branch. Extensive experiments conducted on two popular benchmarks, i.e., MSCOCO and Flickr30K, validate that CODER remarkably outperforms state-of-the-art approaches.
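The abstract combines two mechanisms: a dynamic dictionary (a MoCo-style FIFO queue of embeddings) that enlarges the pool of negatives, and adaptive weighting of those negatives in the contrastive loss. A minimal NumPy sketch of how such pieces could fit together is shown below; the function names, the hardness-softmax weighting rule, and all hyperparameters are illustrative assumptions, not the paper's actual CODER/DCL formulation:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere (cosine-similarity space)."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def weighted_info_nce(query, positive, queue, tau=0.07):
    """InfoNCE-style loss over one query: the positive competes against
    queued negatives, but each negative is re-weighted by its hardness
    (softmax over negative similarities) instead of the uniform 1/K
    weighting of plain contrastive learning.  Illustrative sketch only."""
    q = l2_normalize(query)
    pos_sim = np.dot(q, l2_normalize(positive)) / tau
    neg_sims = queue @ q / tau                 # (K,) similarities to queued negatives
    w = np.exp(neg_sims - neg_sims.max())
    w = w / w.sum()                            # adaptive weights: harder negatives count more
    K = len(queue)
    denom = np.exp(pos_sim) + K * np.sum(w * np.exp(neg_sims))
    return -(pos_sim - np.log(denom))          # -log of the (re-weighted) positive probability

class MomentumQueue:
    """Fixed-size FIFO dictionary of normalized embeddings (MoCo-style)."""
    def __init__(self, dim, size, seed=0):
        rng = np.random.default_rng(seed)
        self.buf = l2_normalize(rng.standard_normal((size, dim)))
        self.ptr = 0
    def enqueue(self, batch):
        for v in batch:                        # overwrite the oldest entries in order
            self.buf[self.ptr] = l2_normalize(v)
            self.ptr = (self.ptr + 1) % len(self.buf)

# Tiny demo: an aligned image-text pair should incur lower loss than a mismatched one.
rng = np.random.default_rng(0)
memory = MomentumQueue(dim=8, size=16)
anchor = l2_normalize(rng.standard_normal(8))
matched = weighted_info_nce(anchor, anchor, memory.buf)
mismatched = weighted_info_nce(anchor, l2_normalize(rng.standard_normal(8)), memory.buf)
memory.enqueue([anchor])                       # the dictionary grows as training proceeds
```

In a full two-branch system one such loss would be applied per modality direction (image-to-text and text-to-query), with the concept-level branch adding its clustering-label prediction loss on top; none of that is reproduced here.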