Integrating Multi-Label Contrastive Learning With Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval

Computer Science · Artificial Intelligence · Discriminative Model · Machine Learning · Feature Learning · Similarity (geometry) · Graph · Pattern Recognition (psychology) · Semantics (computer science) · Representation · Theoretical Computer Science · Image (mathematics)
Authors
Shengsheng Qian, Dizhan Xue, Quan Fang, Changsheng Xu
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [Institute of Electrical and Electronics Engineers]
Volume/Issue: 45 (4): 1-18 · Citations: 58
Identifier
DOI: 10.1109/TPAMI.2022.3188547
Abstract

With the growing amount of multimodal data, cross-modal retrieval has attracted increasing attention and become an active research topic. To date, most existing techniques map multimodal data into a common representation space in which semantic similarities between samples can be measured across modalities. However, these approaches may suffer from the following limitations: 1) they bridge the modality gap by introducing a loss in the common representation space, which may not be sufficient to eliminate the heterogeneity of the modalities; 2) they treat labels as independent entities and ignore label relationships, which hinders the establishment of semantic connections across multimodal data; and 3) they ignore the non-binary values of label similarity in multi-label scenarios, which may lead to poor alignment of representation similarity with label similarity. To tackle these problems, this article proposes two models that learn discriminative and modality-invariant representations for cross-modal retrieval. First, dual generative adversarial networks are built to project multimodal data into a common representation space. Second, to model label dependencies and learn inter-dependent classifiers, we employ multi-hop graph neural networks (consisting of a Probabilistic GNN and an Iterative GNN), where a layer aggregation mechanism is proposed to exploit the propagation information of different hops. Third, we propose a novel soft multi-label contrastive loss for cross-modal retrieval, based on a soft positive sampling probability, which aligns representation similarity with label similarity. Additionally, to support incomplete-modal learning, which has wider applications, we propose a modal reconstruction mechanism that generates missing features. Extensive experiments on three widely used benchmark datasets, i.e., NUS-WIDE, MIRFlickr, and MS-COCO, demonstrate the superiority of the proposed method.
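The abstract does not spell out the loss, but the idea of aligning representation similarity with label similarity can be illustrated concretely. Below is a minimal PyTorch sketch of a soft multi-label contrastive loss, assuming label similarity is measured as Jaccard overlap between multi-hot label vectors and used as a soft positive sampling distribution over cross-modal pairs; the function names, the temperature parameter `tau`, and the Jaccard choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def label_similarity(labels: torch.Tensor) -> torch.Tensor:
    """Jaccard similarity between multi-hot label vectors (an assumed choice).

    labels: (N, C) tensor with entries in {0, 1}.
    Returns an (N, N) matrix with values in [0, 1].
    """
    labels = labels.float()
    inter = labels @ labels.T                                    # |A ∩ B|
    union = labels.sum(1, keepdim=True) + labels.sum(1) - inter  # |A ∪ B|
    return inter / union.clamp(min=1.0)

def soft_multilabel_contrastive_loss(z_img, z_txt, labels, tau=0.1):
    """Soft contrastive loss: cross-entropy against a label-similarity target.

    z_img, z_txt: (N, D) image/text embeddings from the common space.
    """
    z_img = F.normalize(z_img, dim=1)
    z_txt = F.normalize(z_txt, dim=1)
    logits = z_img @ z_txt.T / tau          # (N, N) cross-modal similarities
    sim = label_similarity(labels)          # (N, N) soft relevance scores
    # Soft positive sampling distribution: candidate j acts as a "positive"
    # for anchor i with probability proportional to their label overlap.
    pos = sim / sim.sum(dim=1, keepdim=True).clamp(min=1e-8)
    log_prob = F.log_softmax(logits, dim=1)
    return -(pos * log_prob).sum(dim=1).mean()

# Toy usage: 4 image-text pairs, 5 labels, 16-dim common embeddings.
labels = torch.randint(0, 2, (4, 5))
loss = soft_multilabel_contrastive_loss(torch.randn(4, 16), torch.randn(4, 16), labels)
```

Unlike a standard InfoNCE loss, where only the matched pair on the diagonal counts as positive, every candidate here contributes in proportion to its label overlap, so samples sharing many labels are pulled closer even when they are not exact pairs.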