X-Shaped Interactive Autoencoders With Cross-Modality Mutual Learning for Unsupervised Hyperspectral Image Super-Resolution

Keywords
Hyperspectral imaging, Computer science, Artificial intelligence, Mutual information, Robustness (evolution), Modality (human–computer interaction), Pattern recognition (psychology), Multispectral image, Unsupervised learning, Transfer learning, Image resolution, Biochemistry, Chemistry, Gene
Authors
Jiaxin Li, Ke Zheng, Zhi Li, Lianru Gao, Xiuping Jia
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing [Institute of Electrical and Electronics Engineers]
Volume 61, pp. 1-17 | Cited by: 5
Identifier
DOI: 10.1109/TGRS.2023.3300043
Abstract

Hyperspectral image super-resolution can compensate for the incompleteness of single-sensor imaging and provide desirable products with both high spatial and spectral resolution. Among existing approaches, unmixing-inspired networks have drawn considerable attention owing to their straightforward unsupervised paradigm. However, most of them fail to fully capture and utilize multi-modal information because of the limited representation ability of the constructed networks, leaving large room for further improvement. To this end, we propose an X-shaped interactive autoencoder network with cross-modality mutual learning between hyperspectral and multispectral data, XINet for short. It employs a coupled structure with two autoencoders that derive latent abundances and the corresponding endmembers from the input hyperspectral-multispectral pair. Inside the network, a novel X-shaped interactive architecture is designed by coupling two separate U-Nets through a parameter-sharing strategy, which not only enables sufficient information flow between the two modalities but also yields informative spatial-spectral features. Considering the complementarity between the two modalities, a cross-modality mutual learning module is constructed to further transfer knowledge from one modality to the other, allowing better utilization of multi-modal features. Moreover, a joint self-supervised loss is proposed to optimize XINet, enabling unsupervised training without supervision from external triplets. Extensive experiments, including super-resolution results on four datasets, robustness analysis, and extension to other applications, demonstrate the superiority of our method.
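The abstract describes the general coupled-autoencoder unmixing paradigm behind XINet: one encoder per modality estimates latent abundances, a shared linear decoder plays the role of the endmember matrix, and a joint self-supervised loss couples the two branches so that no external high-resolution hyperspectral target is needed. The sketch below illustrates only that generic formulation in PyTorch; the class names, layer sizes, simple convolutional encoders, and the assumed-known spatial/spectral degradation operators are illustrative assumptions, and the paper's actual X-shaped parameter-shared U-Nets and cross-modality mutual learning module are not reproduced here.

```python
# Minimal sketch of a coupled unmixing autoencoder with a joint self-supervised loss,
# assuming known spatial (blur/downsample) and spectral (band response) degradations.
# Physical unmixing constraints (e.g., nonnegative endmembers) are omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AbundanceEncoder(nn.Module):
    """Maps an input image to per-pixel abundance maps (softmax over endmembers)."""
    def __init__(self, in_bands: int, num_endmembers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1),
            nn.Conv2d(64, num_endmembers, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.softmax(self.net(x), dim=1)  # sum-to-one abundances


class CoupledUnmixingAE(nn.Module):
    """Two encoders (HSI / MSI branches) sharing one linear decoder whose 1x1
    convolution weights act as the endmember spectra."""
    def __init__(self, hs_bands: int, ms_bands: int, num_endmembers: int, scale: int):
        super().__init__()
        self.enc_hs = AbundanceEncoder(hs_bands, num_endmembers)
        self.enc_ms = AbundanceEncoder(ms_bands, num_endmembers)
        self.decoder = nn.Conv2d(num_endmembers, hs_bands, kernel_size=1, bias=False)
        # Degradation operators, assumed known for this sketch: spatial downsampling
        # and a (frozen) spectral response mapping hyperspectral to multispectral bands.
        # In practice the spectral response would be loaded from the sensor, not random.
        self.spatial_down = nn.AvgPool2d(scale)
        self.spectral_resp = nn.Conv2d(hs_bands, ms_bands, kernel_size=1, bias=False)
        self.spectral_resp.weight.requires_grad_(False)

    def forward(self, lr_hsi, hr_msi):
        abun_hs = self.enc_hs(lr_hsi)        # abundances at low spatial resolution
        abun_ms = self.enc_ms(hr_msi)        # abundances at high spatial resolution
        rec_hsi = self.decoder(abun_hs)      # reconstruct the input LR-HSI
        hr_hsi = self.decoder(abun_ms)       # desired HR-HSI product
        rec_msi = self.spectral_resp(hr_hsi) # project the HR-HSI back to MSI bands
        return rec_hsi, rec_msi, hr_hsi

    def self_supervised_loss(self, lr_hsi, hr_msi):
        rec_hsi, rec_msi, hr_hsi = self(lr_hsi, hr_msi)
        loss = (
            F.l1_loss(rec_hsi, lr_hsi)                      # HSI self-reconstruction
            + F.l1_loss(rec_msi, hr_msi)                    # MSI self-reconstruction
            + F.l1_loss(self.spatial_down(hr_hsi), lr_hsi)  # cross-modality consistency
        )
        return loss, hr_hsi
```

A possible usage, with hypothetical band counts and sizes, shows how the unsupervised loss is driven by the observed LR-HSI/HR-MSI pair alone:

```python
model = CoupledUnmixingAE(hs_bands=103, ms_bands=4, num_endmembers=30, scale=4)
lr_hsi = torch.rand(1, 103, 32, 32)    # low-resolution hyperspectral input
hr_msi = torch.rand(1, 4, 128, 128)    # high-resolution multispectral input
loss, hr_hsi = model.self_supervised_loss(lr_hsi, hr_msi)
loss.backward()                         # hr_hsi is the super-resolved estimate
```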