Hyperspectral imaging
Computer science
LiDAR
Discriminative model
Autoencoder
Artificial intelligence
Pattern recognition
Feature learning
Ranging
Representation
Feature
Deep learning
Remote sensing
Computer vision
Geography
Telecommunications
Authors
Zhu Han, Danfeng Hong, Lianru Gao, Jing Yao, Bing Zhang, Jocelyn Chanussot
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/Pages: 60: 1-13
Cited by: 65
Identifier
DOI: 10.1109/tgrs.2022.3155794
Abstract
Deep learning (DL) has attracted wide attention in hyperspectral unmixing (HU) owing to its powerful feature representation ability. As a representative unsupervised DL approach, the autoencoder (AE) has been proven to capture nonlinear components of hyperspectral images more effectively than traditional model-driven linearized methods. However, using hyperspectral images alone for unmixing fails to distinguish objects in complex scenes, especially different endmembers composed of similar materials. To overcome this limitation, we propose a novel multimodal unmixing network for hyperspectral images, called MUNet, which exploits the height differences in light detection and ranging (LiDAR) data in a squeeze-and-excitation (SE)-driven attention fashion to guide the unmixing process, yielding improved performance. MUNet fuses multimodal information and uses the attention map derived from LiDAR to help the network focus on more discriminative and meaningful spatial information in the scene. Moreover, attribute profiles (APs) are adopted to extract the geometrical structures of different objects and thereby better model the spatial information of LiDAR. Experimental results on synthetic and real datasets demonstrate the effectiveness and superiority of the proposed method compared with several state-of-the-art unmixing algorithms. The code will be available at https://github.com/hanzhu97702/IEEE_TGRS_MUNet, contributing to the remote sensing community.
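The SE-driven channel attention the abstract refers to can be sketched roughly as follows. This is a generic squeeze-and-excitation block in NumPy, not the authors' MUNet implementation; the weight shapes, the reduction ratio, and the idea of feeding LiDAR-derived features through it are illustrative assumptions only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_attention(features, w1, w2):
    """Illustrative squeeze-and-excitation (SE) channel attention.

    features: (H, W, C) feature map (e.g., derived from LiDAR attribute
              profiles in a MUNet-like setting -- an assumption here).
    w1:       (C, C // r) bottleneck weight, r = reduction ratio.
    w2:       (C // r, C) expansion weight.
    Returns the feature map reweighted per channel by gates in (0, 1).
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = features.mean(axis=(0, 1))
    # Excitation: bottleneck MLP, ReLU then sigmoid gating -> (C,)
    s = sigmoid(np.maximum(z @ w1, 0.0) @ w2)
    # Rescale: broadcast the channel gates over the spatial grid
    return features * s
```

In an SE block the gates depend only on global channel statistics, so the same per-channel weight is applied at every spatial location; a multimodal variant like MUNet can compute such gates from one modality (LiDAR) and apply them to features of another (hyperspectral).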