Integrating Multi-Label Contrastive Learning With Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval

Keywords
Computer Science, Artificial Intelligence, Discriminative Model, Machine Learning, Feature Learning, Similarity, Graph, Pattern Recognition, Semantics, Representation, Theoretical Computer Science, Image
Authors
Shengsheng Qian,Dizhan Xue,Quan Fang,Changsheng Xu
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Volume/Issue: 45(4), pp. 1-18; Citations: 58
Identifier
DOI: 10.1109/tpami.2022.3188547
Abstract

With the growing amount of multimodal data, cross-modal retrieval has attracted increasing attention and become an active research topic. To date, most existing techniques map multimodal data into a common representation space in which semantic similarities between samples can be measured directly across modalities. However, these approaches may suffer from the following limitations: 1) they bridge the modality gap by introducing losses in the common representation space, which may not be sufficient to eliminate the heterogeneity of the different modalities; 2) they treat labels as independent entities and ignore label relationships, which hinders the establishment of semantic connections across multimodal data; and 3) they ignore the non-binary values of label similarity in multi-label scenarios, which may lead to inefficient alignment between representation similarity and label similarity. To tackle these problems, in this article we propose two models that learn discriminative and modality-invariant representations for cross-modal retrieval. First, dual generative adversarial networks are built to project multimodal data into a common representation space. Second, to model label dependencies and develop inter-dependent classifiers, we employ multi-hop graph neural networks (consisting of a Probabilistic GNN and an Iterative GNN), with a layer aggregation mechanism that exploits the propagation information of different hops. Third, we propose a novel soft multi-label contrastive loss for cross-modal retrieval, based on a soft positive sampling probability, which aligns representation similarity with label similarity. Additionally, to support incomplete-modal learning, which has wider applications, we propose a modal reconstruction mechanism that generates missing features. Extensive experiments on three widely used benchmark datasets, i.e., NUS-WIDE, MIRFlickr, and MS-COCO, demonstrate the superiority of the proposed method.
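The abstract names the soft multi-label contrastive loss but does not give its formula. Below is a minimal PyTorch-style sketch of the general idea, under two assumptions that may differ from the paper's exact formulation: label similarity is taken as the Jaccard overlap between multi-hot label vectors, and it is normalized into a soft positive sampling distribution inside an InfoNCE-style objective. All names here (the function, `tau`, the similarity measure) are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def soft_multilabel_contrastive_loss(img_emb, txt_emb, labels, tau=0.1):
    """Illustrative soft multi-label contrastive loss (not the paper's exact form).

    img_emb, txt_emb: (N, d) image/text embeddings in the common space.
    labels:           (N, C) multi-hot label matrix (float).
    """
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)

    # Non-binary label similarity: Jaccard overlap of label sets (assumption).
    inter = labels @ labels.t()                                   # (N, N)
    union = labels.sum(1, keepdim=True) + labels.sum(1) - inter
    label_sim = inter / union.clamp(min=1.0)

    # Soft positive sampling distribution over candidates for each anchor.
    pos_prob = label_sim / label_sim.sum(1, keepdim=True).clamp(min=1e-8)

    # Temperature-scaled cross-modal similarity logits.
    logits = (img_emb @ txt_emb.t()) / tau                        # (N, N)

    # Cross-entropy between the soft positive distribution and the model's
    # softmax aligns representation similarity with label similarity,
    # computed in both retrieval directions (image-to-text, text-to-image).
    loss_i2t = -(pos_prob * F.log_softmax(logits, dim=1)).sum(1).mean()
    loss_t2i = -(pos_prob * F.log_softmax(logits.t(), dim=1)).sum(1).mean()
    return 0.5 * (loss_i2t + loss_t2i)


# Usage with random toy data: 8 paired samples, 128-d embeddings, 21 labels.
loss = soft_multilabel_contrastive_loss(
    torch.randn(8, 128), torch.randn(8, 128),
    (torch.rand(8, 21) > 0.7).float())
```

In a full system this term would be combined with the adversarial and classification objectives; the loss weighting and the exact label-similarity measure are design choices specified in the paper itself.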