Integrating Multi-Label Contrastive Learning With Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval

Authors
Shengsheng Qian, Dizhan Xue, Quan Fang, Changsheng Xu
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Pages: 1-18 | Cited by: 26
Identifier
DOI: 10.1109/tpami.2022.3188547
Abstract

With the growing amount of multimodal data, cross-modal retrieval has attracted increasing attention and become a hot research topic. To date, most existing techniques map multimodal data into a common representation space in which semantic similarities between samples can be measured across modalities. However, these approaches may suffer from the following limitations: 1) they bridge the modality gap by imposing a loss in the common representation space, which may not be sufficient to eliminate the heterogeneity of the modalities; 2) they treat labels as independent entities and ignore label relationships, which hinders the establishment of semantic connections across multimodal data; and 3) they ignore the non-binary values of label similarity in multi-label scenarios, which may lead to inefficient alignment of representation similarity with label similarity. To tackle these problems, in this article we propose two models that learn discriminative and modality-invariant representations for cross-modal retrieval. First, dual generative adversarial networks are built to project multimodal data into a common representation space. Second, to model label dependencies and develop inter-dependent classifiers, we employ multi-hop graph neural networks (consisting of a Probabilistic GNN and an Iterative GNN), with a layer-aggregation mechanism that exploits propagation information from different hops. Third, we propose a novel soft multi-label contrastive loss for cross-modal retrieval, with a soft positive-sampling probability, which aligns representation similarity with label similarity. Additionally, to adapt to incomplete-modal learning, which broadens the method's applicability, we propose a modal reconstruction mechanism that generates missing features. Extensive experiments on three widely used benchmark datasets, i.e., NUS-WIDE, MIRFlickr, and MS-COCO, demonstrate the superiority of the proposed method.
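The core idea of the soft multi-label contrastive loss can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the cosine choice of label similarity, and the temperature value are assumptions for the sake of the example. The key point it demonstrates is that each pair's weight as a "positive" is proportional to its non-binary label similarity, so minimizing the loss pushes representation similarity toward label similarity.

```python
import numpy as np

def soft_multilabel_contrastive_loss(features, labels, temperature=0.1):
    """Illustrative soft multi-label contrastive loss (hypothetical sketch).

    features: (N, D) embeddings, e.g. pooled from both modalities.
    labels:   (N, C) multi-hot label matrix.

    Instead of binary positives/negatives, the target distribution over
    other samples is proportional to label similarity, which is non-binary
    in multi-label settings.
    """
    # L2-normalize embeddings so dot products are cosine similarities.
    f = features / np.clip(np.linalg.norm(features, axis=1, keepdims=True),
                           1e-8, None)
    lab = labels.astype(float)
    norms = np.clip(np.linalg.norm(lab, axis=1, keepdims=True), 1e-8, None)
    # Non-binary label similarity in [0, 1] (cosine over multi-hot vectors).
    label_sim = (lab @ lab.T) / (norms @ norms.T)

    # Temperature-scaled representation similarities; exclude self-pairs.
    logits = f @ f.T / temperature
    n = f.shape[0]
    eye = np.eye(n, dtype=bool)
    logits[eye] = -np.inf

    # Numerically stable row-wise log-softmax.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_prob[eye] = 0.0  # self-pairs contribute nothing below

    # Soft positive-sampling distribution: label similarity normalized
    # over the non-self pairs of each anchor.
    pos = label_sim.copy()
    pos[eye] = 0.0
    pos = pos / np.clip(pos.sum(axis=1, keepdims=True), 1e-8, None)

    # Cross-entropy between the soft-positive distribution and the
    # softmax over representation similarities.
    return -(pos * log_prob).sum(axis=1).mean()
```

A sample whose labels overlap heavily with another's receives a large target weight for that pair, so the loss is only low when their embeddings are also close; samples with partial label overlap get proportionally smaller weights rather than being treated as hard negatives.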