Journal: IEEE Transactions on Geoscience and Remote Sensing [Institute of Electrical and Electronics Engineers] · Date: 2023-01-01 · Volume 61, pp. 1-13 · Citations: 5
Identifier
DOI: 10.1109/TGRS.2023.3326545
Abstract
Cloud-free image reconstruction is of great significance for improving the quality of optical satellite images, which are vulnerable to bad weather. When cloud cover makes it impossible to observe the surface beneath the clouds, auxiliary data are indispensable to guide the reconstruction of the cloud-contaminated area. Moreover, the areas that require continuous observation are mostly regions with complex ground features, which places higher demands on the restoration of texture, color, and other details during reconstruction. In this paper, we propose a Transformer-based generative adversarial network for cloud-free multispectral image reconstruction via multi-sensor data fusion in satellite images (TransGAN-CFR). Synthetic Aperture Radar (SAR) images, which are unaffected by clouds, serve as auxiliary data and are paired with cloudy optical images as input to the GAN generator. To exploit the deep-shallow features and global-local geographical proximity in remote sensing images, the proposed generator employs a hierarchical encoder-decoder structure in which the Transformer blocks adopt a non-overlapping window multi-head self-attention (WMSA) mechanism and a feed-forward network modified with depth-wise convolutions and a gating mechanism. In addition, we introduce a Triplet loss function specifically designed for cloud removal to bring the generated cloud-free image closer to the ground truth. Compared with seven state-of-the-art deep learning-based cloud removal models, our network yields more natural cloud-free images, with better visual quality and more accurate quantitative results on the SEN12MS-CR dataset.
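The abstract names two distinctive components without giving their formulations: a feed-forward network modified with depth-wise convolutions and a gating mechanism, and a Triplet loss tailored to cloud removal. The PyTorch sketch below illustrates one plausible reading of the gated feed-forward design (a Restormer-style gated depth-wise convolution block); the class name `GatedDConvFFN` and the `expansion` parameter are hypothetical, and the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedDConvFFN(nn.Module):
    """Sketch of a feed-forward network with depth-wise convolutions
    and a gating mechanism (assumed design, not the paper's exact block)."""
    def __init__(self, dim: int, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        # Point-wise projection producing two branches: content and gate.
        self.project_in = nn.Conv2d(dim, hidden * 2, kernel_size=1)
        # Depth-wise 3x3 convolution mixes local spatial context per channel.
        self.dwconv = nn.Conv2d(hidden * 2, hidden * 2, kernel_size=3,
                                padding=1, groups=hidden * 2)
        self.project_out = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.dwconv(self.project_in(x))
        content, gate = x.chunk(2, dim=1)
        # Gating: the GELU-activated branch modulates the content branch.
        return self.project_out(F.gelu(gate) * content)

# Example: a 64-channel feature map passes through unchanged in shape.
# x = torch.randn(1, 64, 32, 32); GatedDConvFFN(64)(x).shape -> (1, 64, 32, 32)
```

Likewise, a Triplet loss for cloud removal can be read as treating the generated image as the anchor, the cloud-free ground truth as the positive, and the cloudy input as the negative, so the output is pulled toward the clean target and pushed away from the contaminated input. The margin value and the L1 distance below are assumptions, not the paper's formulation.

```python
def cloud_removal_triplet_loss(generated: torch.Tensor,
                               ground_truth: torch.Tensor,
                               cloudy: torch.Tensor,
                               margin: float = 1.0) -> torch.Tensor:
    """Assumed triplet objective: anchor = generated, positive = ground
    truth, negative = cloudy input. Distances here are plain L1; a
    feature-space distance would work the same way."""
    d_pos = F.l1_loss(generated, ground_truth)  # pull toward clean target
    d_neg = F.l1_loss(generated, cloudy)        # push away from cloudy input
    return F.relu(d_pos - d_neg + margin)       # hinge at the margin
```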