Keywords
Endmember, Hyperspectral imaging, Artificial intelligence, Computer science, Pattern recognition (psychology), Remote sensing, Sensor fusion, Fusion, Geology, Linguistics, Philosophy
Authors
Wei Gao, Yang Jing-yu, Yu Zhang, Youssef Akoudad, Jie Chen
Identifier
DOI:10.1109/tgrs.2025.3544037
Abstract
Deep learning (DL) has recently garnered substantial interest in hyperspectral unmixing (HU) due to its exceptional learning capabilities. In particular, unsupervised unmixing methods based on autoencoders have become a research hotspot, with many existing networks focusing on the fusion of spatial and spectral information. However, the diversity of fusion structures makes it challenging to select appropriate modules that meet unmixing requirements, while the issue of endmember variability is often neglected. In this article, we propose a novel spatial-spectral adaptive fusion network (SSAF-Net) that accounts for endmember variability. The network consists of two cascaded encoders and a deep generative model (DGM) based on a variational autoencoder (VAE). The encoders perform local spatial-spectral information fusion through channel and spatial attention mechanisms, respectively, while self-perception loss facilitates global information fusion during the cascading process. In addition, we address endmember variability using a proportional perturbation model (PPM), learning the necessary endmember parameters through an elaborately designed DGM. Our SSAF-Net learns both endmember variability and the corresponding abundances in an unsupervised manner. Experimental results on a synthetic dataset and real-world datasets validate the significant superiority of SSAF-Net compared to other methods. The code for this work is available at https://github.com/yjysimply/SSAF-Net.
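The abstract's proportional perturbation model (PPM) treats endmember variability as a multiplicative, per-pixel scaling of a reference endmember matrix, with abundances constrained to be nonnegative and sum-to-one. The following NumPy sketch illustrates that forward mixing model only; the variable names, the Gaussian form of the perturbation, and the noise level are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

bands, n_end = 50, 3                       # spectral bands, number of endmembers
M = rng.uniform(0.1, 1.0, (bands, n_end))  # reference endmember matrix

# Proportional (multiplicative) perturbation of the endmembers for one pixel,
# modelling spectral variability: M_pixel = M * (1 + psi)
psi = rng.normal(0.0, 0.05, (bands, n_end))
M_pixel = M * (1.0 + psi)

# Abundances obey the physical constraints: nonnegative and sum-to-one
a = rng.random(n_end)
a /= a.sum()

# Observed pixel spectrum under the linear mixing model plus sensor noise
y = M_pixel @ a + rng.normal(0.0, 1e-3, bands)
```

In SSAF-Net, the perturbation parameters and abundances playing the roles of `psi` and `a` above are learned jointly and unsupervised by the VAE-based deep generative model, rather than sampled as done here.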