RoMo: Robust Unsupervised Multimodal Learning with Noisy Pseudo Labels

Subjects: Computer Science, Artificial Intelligence, Unsupervised Learning, Pattern Recognition (Psychology), Machine Learning
Authors: Yongxiang Li, Yang Qin, Yuan Sun, Dezhong Peng, Xi Peng, Peng Hu
Source: IEEE Transactions on Image Processing [Institute of Electrical and Electronics Engineers], Vol. 33, pp. 5086-5097. Cited by: 3
DOI: 10.1109/tip.2024.3426482
Abstract

The rise of the metaverse and the increasing volume of heterogeneous 2D and 3D data have created a growing demand for cross-modal retrieval, enabling users to query semantically relevant data across different modalities. Existing methods rely heavily on class labels to bridge semantic correlations; however, collecting large-scale, well-labeled data is expensive and often impractical, making unsupervised learning more attractive and feasible. Nonetheless, unsupervised cross-modal learning struggles to bridge semantic correlations due to the lack of label information, leading to unreliable discrimination. In this paper, we reveal and study a novel problem: unsupervised cross-modal learning with noisy pseudo labels. To address this issue, we propose a 2D-3D unsupervised multimodal learning framework that leverages multimodal data. Our framework consists of three key components: 1) a Self-matching Supervision Mechanism (SSM) warms up the model to encapsulate discrimination into the representations in a self-supervised manner; 2) Robust Discriminative Learning (RDL) further mines discrimination from the imperfect predictions learned during warm-up. To tackle the noise in the predicted pseudo labels, RDL leverages a novel Robust Concentrating Learning Loss (RCLL) to alleviate the influence of uncertain samples, thus achieving robustness against noisy pseudo labels; 3) a Modality-invariance Learning Mechanism (MLM) minimizes the cross-modal discrepancy, driving SSM and RDL to produce common representations. We conduct comprehensive experiments on four 2D-3D multimodal datasets, comparing our method against 14 state-of-the-art approaches, thereby demonstrating its effectiveness and superiority.