Mind the Gap: Learning Modality-Agnostic Representations With a Cross-Modality UNet

Subjects: Modality (human-computer interaction), Computer Science, Artificial Intelligence
Authors
Xin Niu, Enyi Li, Jinchao Liu, Yan Wang, Margarita Osadchy, Yongchun Fang
Source
Journal: IEEE Transactions on Image Processing (Institute of Electrical and Electronics Engineers)
Volume: 33, pp. 655-670. Citations: 3
Identifier
DOI: 10.1109/tip.2023.3348656
Abstract

Cross-modality recognition has many important applications in science, law enforcement, and entertainment. Popular methods for bridging the modality gap include reducing the distributional differences between representations of different modalities, learning indistinguishable representations, or performing explicit modality transfer. The first two approaches lose discriminant information while removing modality-specific variations. The third relies heavily on successful modality transfer and can suffer a catastrophic performance drop when explicit modality transfer is impossible or difficult. To tackle this problem, we propose a compact encoder-decoder neural module (cmUNet) that learns modality-agnostic representations while retaining identity-related information. This is achieved through cross-modality transformation and in-modality reconstruction, enhanced by an adversarial/perceptual loss that encourages indistinguishability of representations in the original sample space. For cross-modality matching, we propose MarrNet, in which cmUNet is connected to a standard feature extraction network that takes the modality-agnostic representations as input and outputs similarity scores for matching. We validated our method on five challenging tasks, namely Raman-infrared spectrum matching, cross-modality person re-identification, and heterogeneous (photo-sketch, visible-near infrared, and visible-thermal) face recognition, where MarrNet showed superior performance compared to state-of-the-art methods. Furthermore, we observed that a cross-modality matching method can be biased toward extracting discriminant information from partial or even wrong regions, due to an inability to deal with the modality gap, which subsequently leads to poor generalization. We show that robustness to occlusions can be an indicator of whether a method bridges the modality gap well; this has, to our knowledge, been largely neglected in previous work. Our experiments demonstrated that MarrNet exhibits excellent robustness against disguises and occlusions, outperforming existing methods by a large margin (>10%). The proposed cmUNet is a meta-approach and can be used as a building block for various applications.
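The abstract names two training objectives for cmUNet: in-modality reconstruction and cross-modality transformation (plus an adversarial/perceptual term not shown here). A minimal sketch of how such a combined objective could be composed follows; the linear `encode`/`decode` maps, weight names, and loss weights are illustrative stand-ins, not the paper's actual UNet architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for cmUNet components: a shared encoder that maps
# both modalities to a modality-agnostic code, and one decoder per modality.
W_enc = rng.standard_normal((8, 8)) * 0.1    # shared encoder
W_dec_a = rng.standard_normal((8, 8)) * 0.1  # decoder for modality A
W_dec_b = rng.standard_normal((8, 8)) * 0.1  # decoder for modality B

def encode(x):
    """Map samples to a modality-agnostic representation."""
    return x @ W_enc

def decode(z, W):
    """Map a representation back into a specific modality's sample space."""
    return z @ W

def cmunet_loss(x_a, x_b, lam_rec=1.0, lam_cross=1.0):
    """Combine the two objectives named in the abstract:
    in-modality reconstruction and cross-modality transformation,
    here both realized as mean-squared errors (an assumption)."""
    z_a, z_b = encode(x_a), encode(x_b)
    # In-modality reconstruction: decode each code back into its own modality.
    l_rec = (np.mean((decode(z_a, W_dec_a) - x_a) ** 2)
             + np.mean((decode(z_b, W_dec_b) - x_b) ** 2))
    # Cross-modality transformation: decode A's code into modality B and vice versa,
    # supervised by the paired sample from the other modality.
    l_cross = (np.mean((decode(z_a, W_dec_b) - x_b) ** 2)
               + np.mean((decode(z_b, W_dec_a) - x_a) ** 2))
    return lam_rec * l_rec + lam_cross * l_cross

# Paired toy samples from the two modalities.
x_a = rng.standard_normal((4, 8))
x_b = rng.standard_normal((4, 8))
loss = cmunet_loss(x_a, x_b)
```

In MarrNet, a feature extraction network would then consume the modality-agnostic codes (`encode(x_a)`, `encode(x_b)`) and produce similarity scores for matching; the adversarial/perceptual term would additionally push reconstructions across modalities to be indistinguishable in the original sample space.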
