Tianyu Song, Shumin Fan, Pengpeng Li, Jiyu Jin, Guiyue Jin, Lei Fan
Source
Journal: IEEE Geoscience and Remote Sensing Letters [Institute of Electrical and Electronics Engineers]. Date: 2023-01-01. Volume 20, pp. 1-5. Cited by: 17
Identifier
DOI:10.1109/lgrs.2023.3319832
Abstract
Existing deep-learning methods for remote sensing (RS) image dehazing rely on convolutional frameworks. However, the inherent limitations of convolution, i.e., local receptive fields and independently processed input elements, prevent the network from learning long-range dependencies and non-uniform haze distributions. To this end, we design an effective Transformer architecture for RS image dehazing, denoted RSDformer. First, given the irregular shapes and non-uniform distributions of haze in RS images, capturing both local and non-local features is crucial for RS image dehazing models. Hence, we propose a detail-compensated transposed attention to extract global and local dependencies across channels. Second, to enhance the ability to learn degraded features and better guide the restoration process, we develop a dual-frequency adaptive block with dynamic filters. Finally, a dynamic gated fusion block is designed to fuse and exchange features across different scales effectively. In this way, the model exhibits robust capabilities to capture dependencies from both global and local areas, improving image content recovery. Extensive experiments show that the proposed method achieves more appealing performance than other competitive methods.
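The transposed attention mentioned in the abstract computes attention across channels rather than spatial positions, so the attention map is C×C instead of HW×HW. The paper's detail-compensated variant and its exact projections are not specified here; the following is only a minimal NumPy sketch of plain channel-wise (transposed) attention under assumed linear projections `Wq`, `Wk`, `Wv` and a fixed scaling factor (the actual method likely uses learned convolutions and a learnable temperature):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transposed_attention(x, Wq, Wk, Wv):
    """Channel-wise (transposed) attention sketch.

    x  : (C, HW) feature map with spatial dims flattened.
    Wq, Wk, Wv : (C, C) assumed linear projections.
    Returns a (C, HW) output; the attention map is (C, C),
    independent of spatial resolution.
    """
    q, k, v = Wq @ x, Wk @ x, Wv @ x
    # Attention across channels: (C, C) instead of (HW, HW).
    attn = softmax((q @ k.T) / np.sqrt(x.shape[1]), axis=-1)
    return attn @ v

# Example: 8 channels, 4x4 spatial grid flattened to 16 positions.
rng = np.random.default_rng(0)
C, HW = 8, 16
x = rng.standard_normal((C, HW))
Wq, Wk, Wv = (rng.standard_normal((C, C)) for _ in range(3))
out = transposed_attention(x, Wq, Wk, Wv)
print(out.shape)  # (8, 16)
```

Because the attention map scales with the channel count rather than the number of pixels, this form stays tractable on the large images common in remote sensing.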