
Integrating Multi-Label Contrastive Learning With Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval

Computer science · Artificial intelligence · Discriminative model · Machine learning · Feature learning · Similarity (geometry) · Graph · Pattern recognition · Semantics (computer science) · Representation learning · Theoretical computer science · Image (mathematics)
Authors
Shengsheng Qian,Dizhan Xue,Quan Fang,Changsheng Xu
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Volume/Issue: 45(4): 1-18 · Cited by: 58
Identifier
DOI:10.1109/tpami.2022.3188547
Abstract

With the growing volume of multimodal data, cross-modal retrieval has attracted increasing attention and become an active research topic. Most existing techniques map multimodal data into a common representation space in which semantic similarities between samples can be measured across modalities. However, these approaches suffer from the following limitations: 1) they bridge the modality gap by imposing a loss in the common representation space, which may be insufficient to eliminate the heterogeneity of the modalities; 2) they treat labels as independent entities and ignore label relationships, which hinders establishing semantic connections across multimodal data; 3) they ignore the non-binary values of label similarity in multi-label scenarios, which can lead to poor alignment between representation similarity and label similarity. To tackle these problems, in this article, we propose two models to learn discriminative and modality-invariant representations for cross-modal retrieval. First, dual generative adversarial networks are built to project multimodal data into a common representation space. Second, to model label dependencies and develop inter-dependent classifiers, we employ multi-hop graph neural networks (a Probabilistic GNN and an Iterative GNN), with a layer aggregation mechanism that exploits propagation information from multiple hops. Third, we propose a novel soft multi-label contrastive loss for cross-modal retrieval, which uses a soft positive sampling probability to align representation similarity with label similarity. Additionally, to support incomplete-modal learning, which broadens applicability, we propose a modal reconstruction mechanism that generates missing features. Extensive experiments on three widely used benchmark datasets, i.e., NUS-WIDE, MIRFlickr, and MS-COCO, show the superiority of the proposed method.
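The core idea of the soft multi-label contrastive loss can be illustrated with a minimal sketch: each image-text pair's "positive" weight is derived from the similarity of the samples' multi-hot label vectors rather than a hard 0/1 match. The sketch below is an assumption-laden illustration, not the paper's exact formulation: it approximates label similarity with Jaccard overlap and normalizes it into a soft target distribution per anchor; the function names and the temperature parameter `tau` are hypothetical.

```python
import numpy as np

def label_similarity(y_a, y_b):
    """Soft similarity between two multi-hot label vectors.
    Approximated here with Jaccard overlap (an assumption; the paper
    defines its own soft label similarity)."""
    inter = np.minimum(y_a, y_b).sum()
    union = np.maximum(y_a, y_b).sum()
    return inter / union if union > 0 else 0.0

def soft_multilabel_contrastive_loss(z_img, z_txt, labels, tau=0.1):
    """Cross-modal contrastive loss with soft positive sampling:
    each pair's target weight is its label similarity, normalized
    per anchor, so representation similarity is pushed to follow
    label similarity instead of a hard one-positive assignment."""
    n = z_img.shape[0]
    # cosine similarity logits between image and text embeddings
    zi = z_img / np.linalg.norm(z_img, axis=1, keepdims=True)
    zt = z_txt / np.linalg.norm(z_txt, axis=1, keepdims=True)
    logits = zi @ zt.T / tau
    # soft positive sampling probabilities from label similarity
    w = np.array([[label_similarity(labels[i], labels[j]) for j in range(n)]
                  for i in range(n)])
    w = w / np.clip(w.sum(axis=1, keepdims=True), 1e-12, None)
    # cross-entropy between the soft targets and softmax over logits
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-(w * log_p).sum(axis=1).mean())
```

With hard one-hot targets this reduces to the standard InfoNCE objective; the soft targets let partially overlapping label sets contribute proportionally, which is the alignment property the abstract describes.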