Computer science
Convolutional neural network
Synthetic aperture radar
Artificial intelligence
Deep learning
LiDAR
Ranging
Pattern recognition (psychology)
Remote sensing
Machine learning
Data mining
Telecommunications
Geology
Authors
Xin Wu,Danfeng Hong,Jocelyn Chanussot
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2021-11-02
Volume/Issue: 60, pp. 1-10
Citations: 168
Identifier
DOI:10.1109/tgrs.2021.3124913
Abstract
In recent years, enormous research efforts have been made to improve the classification performance of single-modal remote sensing (RS) data. However, with the ever-growing availability of RS data acquired from satellite or airborne platforms, the simultaneous processing and analysis of multimodal RS data pose a new challenge to researchers in the RS community. To this end, we propose a new deep-learning-based framework for multimodal RS data classification, in which convolutional neural networks (CNNs) are taken as the backbone and equipped with an advanced cross-channel reconstruction module, called CCR-Net. As the name suggests, CCR-Net learns more compact fusion representations of different RS data sources by means of a reconstruction strategy across modalities, enabling them to exchange information with one another more effectively. Extensive experiments conducted on two multimodal RS datasets, including hyperspectral (HS) and light detection and ranging (LiDAR) data, i.e., the Houston2013 dataset, and HS and synthetic aperture radar (SAR) data, i.e., the Berlin dataset, demonstrate the effectiveness and superiority of the proposed CCR-Net in comparison with several state-of-the-art multimodal RS data classification methods. The codes will be openly and freely available at https://github.com/danfenghong/IEEE_TGRS_CCR-Net for the sake of reproducibility.
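To make the fusion idea in the abstract concrete, the following is a minimal conceptual sketch (not the authors' CCR-Net implementation, whose code is at the repository above): two toy modalities are projected into a shared latent space, one modality's latent features are reconstructed from the other's (here via a simple least-squares map standing in for the learned cross-channel reconstruction module), and the aligned features are concatenated into a fused representation. All shapes, the linear "encoders," and the least-squares solver are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-pixel features: 100 pixels, an HS modality with 50 spectral
# bands and a LiDAR-derived modality with 4 channels (made-up sizes).
hs = rng.standard_normal((100, 50))
lidar = rng.standard_normal((100, 4))

def encode(x, dim, rng):
    """Random linear projection standing in for a CNN encoder branch."""
    w = rng.standard_normal((x.shape[1], dim)) / np.sqrt(x.shape[1])
    return x @ w

f_hs = encode(hs, 16, rng)        # modality-specific latent features
f_lidar = encode(lidar, 16, rng)

# Cross-modal reconstruction: fit a map that rebuilds the LiDAR latent
# features from the HS latent features (least squares in this sketch),
# so the two branches exchange information through a shared space.
w_cross, *_ = np.linalg.lstsq(f_hs, f_lidar, rcond=None)
f_lidar_rec = f_hs @ w_cross
recon_error = float(np.mean((f_lidar_rec - f_lidar) ** 2))

# Compact fused representation: concatenate the aligned latent features,
# which a classifier head would then consume.
fused = np.concatenate([f_hs, f_lidar_rec], axis=1)
print(fused.shape)  # (100, 32)
```

In CCR-Net itself the encoders are CNNs and the reconstruction module is trained jointly with the classifier; this sketch only illustrates the "reconstruct one modality's features from the other, then fuse" pattern.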