Hyperspectral imaging
Artificial intelligence
Computer science
Deep learning
Convolutional neural network
Transfer learning
Pattern recognition (psychology)
Feature extraction
Contextual image classification
Artificial neural network
Dimensionality reduction
Image (mathematics)
Machine learning
Authors
Bing Liu, Xuchu Yu, Anzhu Yu, Gang Wan
Identifier
DOI: 10.1117/1.jrs.12.026028
Abstract
Deep learning methods have recently been explored successfully for hyperspectral image classification. However, they may not perform well when training samples are scarce. A deep transfer learning method is proposed to improve hyperspectral image classification performance when training samples are limited. First, a Siamese network composed of two convolutional neural networks is designed to extract local image descriptors. Subsequently, the pretrained Siamese network is reused to transfer knowledge to the hyperspectral image classification task by feeding the deep features extracted from each band into a recurrent neural network; in this way, a deep convolutional recurrent neural network is constructed for hyperspectral image classification. Finally, the entire network is fine-tuned with a small number of labeled samples. An important characteristic of the designed model is that the deep convolutional recurrent neural network exploits spatial–spectral features without dimension reduction, and the transfer learning strategy makes it possible to train such a deep model with limited labeled samples. Experiments on three widely used hyperspectral datasets demonstrate that the proposed transfer learning method improves classification performance and achieves results competitive with state-of-the-art methods.
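To make the described pipeline concrete, the following is a minimal PyTorch sketch of the overall idea: a small CNN shared by a Siamese pair for descriptor learning, then the same CNN applied per spectral band with the band-feature sequence fed into a recurrent layer. All layer sizes, the patch size, the hidden width, the class count, and the names (BandCNN, SiameseNet, ConvRecurrentClassifier) are illustrative assumptions for exposition, not the authors' actual configuration.

# Minimal sketch of a Siamese-pretrained convolutional recurrent classifier.
# Hyperparameters and names are assumed for illustration only.
import torch
import torch.nn as nn

class BandCNN(nn.Module):
    """Small CNN that maps one single-band spatial patch to a feature vector;
    pretrained inside a Siamese setup on local image descriptors, then reused."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):                       # x: (batch, 1, H, W)
        return self.fc(self.features(x).flatten(1))

class SiameseNet(nn.Module):
    """Two weight-sharing copies of BandCNN; trained so that matching patch
    pairs produce nearby descriptors (e.g., with a contrastive loss)."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone

    def forward(self, patch_a, patch_b):
        return self.backbone(patch_a), self.backbone(patch_b)

class ConvRecurrentClassifier(nn.Module):
    """Applies the (pretrained) BandCNN to every spectral band of a patch and
    feeds the band-feature sequence into a GRU, so spatial-spectral features
    are used without dimension reduction."""
    def __init__(self, backbone, n_classes, feat_dim=64, hidden=128):
        super().__init__()
        self.backbone = backbone                # transferred weights
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, cube):                    # cube: (batch, bands, H, W)
        b, c, h, w = cube.shape
        bands = cube.reshape(b * c, 1, h, w)    # treat each band as one patch
        feats = self.backbone(bands).reshape(b, c, -1)   # (batch, bands, feat_dim)
        _, last = self.rnn(feats)               # last hidden state: (1, batch, hidden)
        return self.classifier(last.squeeze(0))

# Step 1: pretrain the Siamese pair on patch pairs; step 2: transfer the backbone
# and fine-tune end to end on a few labeled samples (numbers below are placeholders).
backbone = BandCNN()
siamese = SiameseNet(backbone)
model = ConvRecurrentClassifier(backbone, n_classes=16)
logits = model(torch.randn(4, 200, 27, 27))    # 4 patches, 200 spectral bands
print(logits.shape)                            # torch.Size([4, 16])

The key design point this sketch tries to mirror is that the recurrent layer consumes one CNN feature vector per band in spectral order, so the full spectral dimension is retained rather than being compressed beforehand.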