Multi-source
Domain adaptation
Pattern recognition (psychology)
Contextual image classification
Multispectral image
Synthetic aperture radar
Domain (mathematical analysis)
Generative adversarial network
Classifier (UML)
Feature (linguistics)
Authors
Shunping Ji, Dingpan Wang, Muying Luo
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2021-05-01
Volume/Issue: 59 (5): 3816-3828
Citations: 5
Identifier
DOI:10.1109/tgrs.2020.3020804
Abstract
The accuracy of remote sensing image segmentation and classification is known to decrease dramatically when the source and target images come from different sources; while deep learning-based models have boosted performance, they are only effective when trained with a large number of labeled source images that are similar to the target images. In this article, we propose a generative adversarial network (GAN)-based domain adaptation method for land cover classification using new target remote sensing images that differ greatly from the labeled source images. In our GAN, the source and target images are fully aligned in the image-space, feature-space, and output-space domains in two stages via adversarial learning. The source images are translated to the style of the target images and then used to train a fully convolutional network (FCN) for semantic segmentation, which classifies the land cover types of the target images. The domain adaptation and segmentation are integrated to form an end-to-end framework. The experiments that we conducted on a multisource data set covering more than 3500 km² with 51,560 256×256 high-resolution satellite images in Wuhan city, and on a cross-city data set with 11,383 256×256 aerial images in Potsdam and Vaihingen, demonstrated that our method exceeded the recent GAN-based domain adaptation methods by at least 6.1% and 4.9% in the mean intersection over union (mIoU) and overall accuracy (OA) indexes, respectively. We also proved that our GAN is a generic framework that can be applied to other domain transfer methods to boost their performance.
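The abstract reports results in mean intersection over union (mIoU) and overall accuracy (OA). As a minimal sketch of how these segmentation metrics are conventionally computed from a class confusion matrix (the function names here are mine, not from the paper):

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Confusion matrix with ground-truth labels as rows, predictions as columns."""
    mask = (gt >= 0) & (gt < num_classes)
    return np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def miou_and_oa(pred, gt, num_classes):
    """Return (mIoU, OA) for flattened label maps `pred` and `gt`."""
    cm = confusion_matrix(pred, gt, num_classes)
    tp = np.diag(cm)                                  # per-class true positives
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp      # per-class union of pred and gt
    iou = tp / np.maximum(union, 1)                   # guard against empty classes
    return iou.mean(), tp.sum() / cm.sum()
```

For example, with predictions `[0, 1, 1, 2]` against ground truth `[0, 1, 2, 2]` over 3 classes, the per-class IoUs are 1.0, 0.5, and 0.5, so mIoU is 2/3 and OA is 0.75. Note that classes absent from both maps contribute an IoU of 0 here; some evaluation protocols instead exclude such classes from the mean.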