Computer science
Image fusion
Computer vision
Image resolution
Cloud computing
Iterative reconstruction
Dual (grammatical number)
Remote sensing
Artificial intelligence
Image (mathematics)
Geography
Operating system
Literature
Art
Authors
W. Liu, Yonghua Jiang, Jingyin Wang, Guo Zhang, Da Li, Huaibo Song, Jun Yang, Xiao Huang, Xinghua Li
Source
Journal: IEEE Geoscience and Remote Sensing Letters
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-1
Citations: 1
Identifier
DOI:10.1109/lgrs.2024.3356533
Abstract
Large-ratio cloud occlusion significantly hampers the utilization of high-resolution remote sensing imagery. Existing reconstruction methods (1) overlook the fact that reconstructed and composite images should share high- and low-level semantic and visual attributes in non-reconstructed regions, which exacerbates pronounced boundary effects; (2) neglect appearance discrepancies between reconstructed and non-reconstructed regions, leading to spectral degradation and texture loss; and (3) overlook the problem of reconstructing large-ratio missing information. To address these issues, a global and local dual fusion network is proposed in this study for large-ratio cloud occlusion removal in high-resolution remote sensing images. The global foreground–background aware attention module tackles shared high-level semantic features, whereas the local visual feature enhancement module addresses appearance differences. The global and local dual fusion network combines the Sobel and reconstruction loss functions for effective reconstruction by employing a two-stage fusion strategy. Compared to the classical recurrent feature reasoning network, spatiotemporal generator network, spatial-temporal-spectral convolutional neural network, and bishift network, the proposed model demonstrates superior quantitative and visual reconstruction outcomes for the 40%, 50%, and 70% missing ratios of Gaofen-1 (2 m) imagery.
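The abstract states that the network combines a Sobel (edge-aware) loss with a reconstruction loss. The paper's exact formulation is not given here, so the following is a minimal NumPy sketch of one common way to combine the two: an L1 reconstruction term plus an L1 penalty on the difference of Sobel gradient magnitudes, weighted by a hypothetical coefficient `lam`. The function names and the weighting scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel kernels ('edge' padding, 'same' output).

    A naive sliding-window convolution; real training code would use a
    framework's conv2d instead.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * kx)
            gy[i, j] = np.sum(window * ky)
    return np.hypot(gx, gy)

def dual_loss(pred, target, lam=0.1):
    """L1 reconstruction loss plus a Sobel edge-consistency term (assumed form)."""
    rec = np.mean(np.abs(pred - target))                                  # pixel fidelity
    edge = np.mean(np.abs(sobel_edges(pred) - sobel_edges(target)))       # edge fidelity
    return rec + lam * edge
```

The edge term pushes the reconstructed region to match the target's texture boundaries, which is one plausible motivation for pairing a Sobel loss with a plain reconstruction loss in cloud-removal networks.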