Hyperspectral imaging
Computer science
LiDAR
Multispectral image
Spatial analysis
Artificial intelligence
Remote sensing
Convolution (computer science)
Modality
Convolutional neural network
Pattern recognition
Artificial neural network
Geography
Chemistry
Polymer chemistry
Authors
Xianghai Wang, Junheng Zhu, Yining Feng, Lu Wang
Source
Journal: IEEE Geoscience and Remote Sensing Letters
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 21: 1-5
Citations: 4
Identifier
DOI: 10.1109/LGRS.2024.3350633
Abstract
The acquisition of multisource remote-sensing (RS) data has become increasingly convenient thanks to the rapid innovation of RS imaging technology. The fusion and classification of hyperspectral images (HSIs) and Light Detection and Ranging (LiDAR) data have become a research hotspot because the two modalities are highly complementary, and the vigorous development of deep learning (DL) provides effective methods for this task. Most existing methods based on convolutional neural networks (CNNs) use fixed convolution kernels, making it difficult to extract multiscale detailed features. In this letter, we propose a multiscale pyramid fusion framework based on spatial–spectral cross-modal attention (S2CA) for HSI and LiDAR classification. The framework has a strong multiscale information learning ability, especially in areas with complex information changes, thereby improving classification accuracy. Multiscale pyramid convolution is used to extract multiscale features, and an effective feature recalibration (EFR) module enhances useful features and suppresses useless information at each scale. To increase the interaction of information between modalities, we propose an S2CA module, in which the features of the different modalities enhance each other. Experiments on three real public datasets show that, compared with existing advanced methods, the proposed method achieves the best results. The source code of the multiscale S2CA network (MS2CANet) will be made publicly available at https://github.com/junhengzhu/MS2CANet .
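The abstract does not detail the internals of the S2CA module, but its core idea, features of one modality re-weighting the features of the other, can be sketched as channel attention computed from one branch and applied to the opposite branch. The function names (`channel_attention`, `cross_modal_enhance`) and the sigmoid-gated global-average-pooling form below are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def channel_attention(feat: np.ndarray) -> np.ndarray:
    """Per-channel gate for a (C, H, W) feature map.

    Global average pooling over the spatial dimensions followed by a
    sigmoid, yielding one weight in (0, 1) per channel.
    """
    pooled = feat.mean(axis=(1, 2))          # shape (C,)
    return 1.0 / (1.0 + np.exp(-pooled))     # sigmoid gate

def cross_modal_enhance(hsi_feat: np.ndarray, lidar_feat: np.ndarray):
    """Cross-modal enhancement sketch: each modality's (C, H, W) feature
    map is re-weighted by the channel attention of the *other* modality,
    so complementary information guides the enhancement."""
    hsi_out = hsi_feat * channel_attention(lidar_feat)[:, None, None]
    lidar_out = lidar_feat * channel_attention(hsi_feat)[:, None, None]
    return hsi_out, lidar_out
```

In the full network this exchange would sit after the multiscale pyramid convolutions, operating on feature maps that have already been projected to a common channel count.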