Integrating Multi-Label Contrastive Learning With Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval

Authors
Shengsheng Qian, Dizhan Xue, Quan Fang, Changsheng Xu
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [Institute of Electrical and Electronics Engineers]
Volume/Issue: 45 (4): 1-18 · Cited by: 58
Identifier
DOI: 10.1109/tpami.2022.3188547
Abstract

With the growing amount of multimodal data, cross-modal retrieval has attracted increasing attention and become an active research topic. Most existing techniques convert multimodal data into a common representation space where semantic similarities between samples can be measured across modalities. However, these approaches may suffer from the following limitations: 1) They overcome the modality gap by introducing losses in the common representation space, which may not be sufficient to eliminate the heterogeneity of the various modalities; 2) They treat labels as independent entities and ignore label relationships, which hinders the establishment of semantic connections across multimodal data; 3) They ignore the non-binary values of label similarity in multi-label scenarios, which may lead to inefficient alignment of representation similarity with label similarity. To tackle these problems, in this article, we propose two models to learn discriminative and modality-invariant representations for cross-modal retrieval. First, dual generative adversarial networks are built to project multimodal data into a common representation space. Second, to model label dependencies and develop inter-dependent classifiers, we employ multi-hop graph neural networks (consisting of a Probabilistic GNN and an Iterative GNN), where a layer aggregation mechanism is proposed to exploit propagation information from multiple hops. Third, we propose a novel soft multi-label contrastive loss for cross-modal retrieval, which uses a soft positive sampling probability to align representation similarity with label similarity. Additionally, to adapt to incomplete-modal learning, which has wider applicability, we propose a modal reconstruction mechanism to generate missing features. Extensive experiments on three widely used benchmark datasets, i.e., NUS-WIDE, MIRFlickr, and MS-COCO, demonstrate the superiority of the proposed method.
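The abstract names the soft multi-label contrastive loss but does not spell out its form. Below is a minimal PyTorch sketch of the general idea under stated assumptions: the soft positive sampling probability is approximated by the Jaccard overlap of the two samples' label sets, and it replaces the one-hot diagonal targets of a standard InfoNCE-style contrastive loss. The function name and the Jaccard choice are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def soft_multilabel_contrastive_loss(z_img, z_txt, labels, temperature=0.1):
    # z_img, z_txt: (N, d) image/text representations in the common space.
    # labels: (N, C) binary multi-label matrix; each sample is assumed to
    # carry at least one label so its diagonal soft weight is nonzero.
    z_img = F.normalize(z_img, dim=1)
    z_txt = F.normalize(z_txt, dim=1)
    labels = labels.float()

    # Temperature-scaled cross-modal similarity logits.
    logits = z_img @ z_txt.t() / temperature                 # (N, N)

    # Soft positive weights from the Jaccard overlap of label sets, so a pair
    # sharing more labels counts as "more positive" than one sharing fewer.
    inter = labels @ labels.t()                              # |L_i ∩ L_j|
    union = labels.sum(1, keepdim=True) + labels.sum(1) - inter
    soft_pos = inter / union.clamp(min=1.0)                  # values in [0, 1]
    soft_pos = soft_pos / soft_pos.sum(1, keepdim=True)      # row-normalize

    # Cross-entropy against the soft target distribution instead of the
    # one-hot diagonal used by binary-positive contrastive losses.
    log_prob = F.log_softmax(logits, dim=1)
    return -(soft_pos * log_prob).sum(1).mean()

Under these soft targets, pairs with heavily overlapping label sets are pulled together more strongly than pairs sharing only a single label, which is the alignment of representation similarity with label similarity that the abstract describes.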