Integrating Multi-Label Contrastive Learning With Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval

Authors
Shengsheng Qian, Dizhan Xue, Quan Fang, Changsheng Xu
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Volume 45, Issue 4, pp. 1-18. Cited by: 58
Identifier
DOI: 10.1109/tpami.2022.3188547
Abstract

With the growing amount of multimodal data, cross-modal retrieval has attracted more and more attention and become a hot research topic. To date, most existing techniques project multimodal data into a common representation space where semantic similarities between samples can be easily measured across modalities. However, these approaches may suffer from the following limitations: 1) They bridge the modality gap by introducing a loss in the common representation space, which may not be sufficient to eliminate the heterogeneity of the various modalities; 2) They treat labels as independent entities and ignore label relationships, which hinders establishing semantic connections across multimodal data; 3) They ignore the non-binary values of label similarity in multi-label scenarios, which may lead to inefficient alignment of representation similarity with label similarity. To tackle these problems, in this article, we propose two models to learn discriminative and modality-invariant representations for cross-modal retrieval. First, dual generative adversarial networks are built to project multimodal data into a common representation space. Second, to model label dependencies and develop inter-dependent classifiers, we employ multi-hop graph neural networks (consisting of a Probabilistic GNN and an Iterative GNN), where a layer aggregation mechanism is introduced to exploit propagation information from multiple hops. Third, we propose a novel soft multi-label contrastive loss for cross-modal retrieval, with a soft positive sampling probability, which can align representation similarity with label similarity. Additionally, to support incomplete-modal learning, which has wider applications, we propose a modal reconstruction mechanism to generate missing features. Extensive experiments on three widely used benchmark datasets, i.e., NUS-WIDE, MIRFlickr, and MS-COCO, show the superiority of our proposed method.
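The key idea of the soft multi-label contrastive loss described above, aligning representation similarity with (non-binary) label similarity, can be illustrated with a minimal sketch. This is not the paper's exact formulation: the label-similarity measure (cosine between multi-label vectors), the temperature value, and the InfoNCE-style normalization are all assumptions made for illustration, and a real implementation would compute this across modalities with learned encoders.

```python
import numpy as np

def soft_multilabel_contrastive_loss(z, y, tau=0.1, eps=1e-8):
    """Illustrative soft multi-label contrastive loss (not the paper's exact form).

    z: (n, d) L2-normalized sample representations.
    y: (n, c) binary multi-label matrix.
    Pairs that share more labels receive a larger soft positive weight,
    so the loss pulls their representations closer together.
    """
    n = z.shape[0]
    # Soft positive weights: cosine similarity between label vectors, in [0, 1].
    yn = y / (np.linalg.norm(y, axis=1, keepdims=True) + eps)
    w = yn @ yn.T
    np.fill_diagonal(w, 0.0)                      # exclude self-pairs
    w = w / (w.sum(axis=1, keepdims=True) + eps)  # soft positive sampling probability
    # InfoNCE-style log-softmax over representation similarities.
    logits = (z @ z.T) / tau
    np.fill_diagonal(logits, -np.inf)             # self excluded from the softmax
    m = logits.max(axis=1, keepdims=True)
    log_p = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    np.fill_diagonal(log_p, 0.0)                  # diagonal has zero weight anyway
    # Cross-entropy between soft label-similarity targets and the softmax.
    return -(w * log_p).sum() / n
```

When label similarity is binary (exactly one positive per anchor), this reduces to the standard InfoNCE loss; the soft weights generalize it so that partially overlapping label sets contribute proportionally.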