Integrating Multi-Label Contrastive Learning With Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval

Keywords: Computer Science · Artificial Intelligence · Machine Learning · Discriminative Model · Feature Learning · Similarity · Graph · Pattern Recognition · Semantics · Representation · Theoretical Computer Science · Image
Authors
Shengsheng Qian, Dizhan Xue, Quan Fang, Changsheng Xu
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [Institute of Electrical and Electronics Engineers]
Pages: 1-18 · Cited by: 26
Identifier
DOI: 10.1109/tpami.2022.3188547
Abstract

With the growing amount of multimodal data, cross-modal retrieval has attracted increasing attention and become an active research topic. To date, most existing techniques convert multimodal data into a common representation space where semantic similarities between samples can be easily measured across modalities. However, these approaches may suffer from the following limitations: 1) they bridge the modality gap by introducing losses in the common representation space, which may not be sufficient to eliminate the heterogeneity of the modalities; 2) they treat labels as independent entities and ignore label relationships, which hinders establishing semantic connections across multimodal data; 3) they ignore the non-binary values of label similarity in multi-label scenarios, which may lead to poor alignment of representation similarity with label similarity. To tackle these problems, in this article we propose two models that learn discriminative and modality-invariant representations for cross-modal retrieval. First, dual generative adversarial networks are built to project multimodal data into a common representation space. Second, to model label dependencies and develop inter-dependent classifiers, we employ multi-hop graph neural networks (consisting of a Probabilistic GNN and an Iterative GNN), where a layer aggregation mechanism is proposed to exploit propagation information from different hops. Third, we propose a novel soft multi-label contrastive loss for cross-modal retrieval with a soft positive sampling probability, which aligns representation similarity with label similarity. Additionally, to support incomplete-modal learning, which has wider applications, we propose a modal reconstruction mechanism to generate missing features. Extensive experiments on three widely used benchmark datasets, i.e., NUS-WIDE, MIRFlickr, and MS-COCO, show the superiority of the proposed method.
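
The adversarial half of the approach can be pictured as a small modality discriminator playing a minimax game against the two encoders. The sketch below is a minimal PyTorch illustration under assumed names (`ModalityDiscriminator`, `adversarial_losses`, `img_feat`, `txt_feat` are hypothetical), not the paper's implementation; the paper's dual GANs involve more machinery than this single discriminator.

```python
import torch
import torch.nn as nn

class ModalityDiscriminator(nn.Module):
    """Predicts which modality a common-space feature came from."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # one logit per sample: image vs. text

bce = nn.BCEWithLogitsLoss()

def adversarial_losses(disc: ModalityDiscriminator,
                       img_feat: torch.Tensor,
                       txt_feat: torch.Tensor):
    """Minimax game: the discriminator learns to separate modalities,
    while the encoders are trained (via g_loss) to fool it, pushing
    image and text features toward a modality-invariant common space."""
    img_logit, txt_logit = disc(img_feat), disc(txt_feat)
    # Discriminator step: detach features so encoder weights are untouched.
    d_loss = bce(disc(img_feat.detach()), torch.ones_like(img_logit)) + \
             bce(disc(txt_feat.detach()), torch.zeros_like(txt_logit))
    # Encoder step: flipped targets confuse the discriminator.
    g_loss = bce(img_logit, torch.zeros_like(img_logit)) + \
             bce(txt_logit, torch.ones_like(txt_logit))
    return d_loss, g_loss
```

In training, `d_loss` and `g_loss` would be minimized by the discriminator's and encoders' optimizers respectively, alternating steps as in a standard GAN.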
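The label-relation modeling can likewise be sketched as a multi-hop GNN over a precomputed label co-occurrence graph, with all hops aggregated into one classifier vector per label. This is a generic GCN-style sketch under assumed names (`LabelGNN`, `adj`), not the paper's Probabilistic or Iterative GNN.

```python
import torch
import torch.nn as nn

class LabelGNN(nn.Module):
    """Multi-hop GNN over a label co-occurrence graph.

    Label embeddings are propagated over a normalized adjacency matrix
    for K hops; all hops are concatenated (layer aggregation) and mixed,
    yielding one inter-dependent classifier per label.
    """
    def __init__(self, num_labels: int, dim: int, hops: int = 3):
        super().__init__()
        self.emb = nn.Parameter(torch.randn(num_labels, dim) * 0.01)
        self.hops = hops
        self.mix = nn.Linear(dim * (hops + 1), dim)

    def forward(self, adj: torch.Tensor) -> torch.Tensor:
        h, per_hop = self.emb, [self.emb]
        for _ in range(self.hops):
            h = torch.relu(adj @ h)  # one hop of message passing
            per_hop.append(h)
        return self.mix(torch.cat(per_hop, dim=1))  # (C, dim) classifiers

# Usage: multi-label logits as dot products of common-space features
# with the label classifiers.
# classifiers = LabelGNN(num_labels=80, dim=512)(adj)  # adj: (80, 80)
# logits = features @ classifiers.t()                  # (N, 80)
```

Concatenating every hop, rather than keeping only the last one, lets the classifiers use both local (1-hop) and more global label dependencies.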
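Finally, the soft multi-label contrastive loss replaces hard positive/negative pair assignments with a distribution over positives derived from label similarity. The sketch below assumes cosine similarity of multi-hot label vectors as the soft positive probability and operates within a single batch of embeddings; the paper defines its own sampling probability and applies the loss across modalities.

```python
import torch
import torch.nn.functional as F

def soft_multilabel_contrastive_loss(z: torch.Tensor,
                                     labels: torch.Tensor,
                                     temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss whose positives are weighted by label similarity.

    z:      (N, D) common-space embeddings
    labels: (N, C) multi-hot label matrix
    """
    n = z.size(0)
    z = F.normalize(z, dim=1)
    logits = z @ z.t() / temperature  # representation similarities

    # Soft label similarity in [0, 1]; cosine of multi-hot vectors is an
    # assumption standing in for the paper's soft positive probability.
    l = F.normalize(labels.float(), dim=1)
    soft_pos = l @ l.t()

    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    soft_pos = soft_pos.masked_fill(eye, 0.0)  # exclude self-pairs
    soft_pos = soft_pos / soft_pos.sum(dim=1, keepdim=True).clamp_min(1e-8)

    # Cross-entropy between the soft positive distribution and the
    # softmax over representation similarities (self-pairs excluded).
    log_prob = F.log_softmax(logits.masked_fill(eye, float('-inf')), dim=1)
    log_prob = log_prob.masked_fill(eye, 0.0)  # avoid 0 * (-inf) = NaN
    return -(soft_pos * log_prob).sum(dim=1).mean()
```

Because the target distribution is proportional to label similarity rather than binary, minimizing this loss pushes representation similarity to track the graded, non-binary label similarity the abstract describes.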