Computer science
Encoder
Convolutional neural network
Computer vision
Image segmentation
Artificial intelligence
Segmentation
Transformer
Feature learning
Deep learning
Feature extraction
Pattern recognition (psychology)
Engineering
Electrical engineering
Voltage
Operating system
Authors
Yan Chen,Quan Dong,Xiaofeng Wang,Qianchuan Zhang,Menglei Kang,Wenxiang Jiang,Mengyuan Wang,Lixiang Xu,Chen Zhang
Identifier
DOI:10.1109/jstars.2024.3358851
Abstract
With the rapid progress of deep learning, convolutional neural networks (CNNs) have been extensively applied to the semantic segmentation of remote sensing images and have achieved significant progress. However, owing to the local nature of convolution operations, CNNs are limited in capturing global contextual information. Recently, the Transformer has become a focus of research in computer vision and has shown great potential in extracting global contextual information, further promoting the development of semantic segmentation tasks. In this paper, we use ResNet50 as the encoder, embed a hybrid attention mechanism into the Transformer, and propose a Transformer-based decoder. The Channel-Spatial Transformer Block (CSTB) further aggregates features by integrating the local feature maps extracted by the encoder with their associated global dependencies; at the same time, an adaptive approach is employed to reweight the interdependent channel maps to enhance feature fusion. The Global Cross-Fusion Module (GCFM) combines the extracted complementary features to obtain more comprehensive semantic information. Extensive comparative experiments were conducted on the ISPRS Potsdam and Vaihingen datasets, where mIoU reached 78.06% and 76.37%, respectively. The results of multiple ablation experiments also validate the effectiveness of the proposed method.
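To make the described idea concrete, the sketch below shows a minimal channel-spatial attention block in the spirit of the CSTB: spatial self-attention applied to CNN encoder features, followed by adaptive channel reweighting of the fused result. This is only an illustrative assumption based on the abstract; the class name ChannelSpatialBlock, the layer choices, shapes, and hyperparameters are hypothetical and are not the authors' implementation.

```python
# Illustrative sketch (assumed, not the paper's code): spatial self-attention
# over encoder feature maps plus adaptive channel reweighting, loosely
# mirroring the CSTB described in the abstract.
import torch
import torch.nn as nn


class ChannelSpatialBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        # Spatial branch: multi-head self-attention over flattened H*W tokens
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Channel branch: squeeze-and-excitation style adaptive reweighting
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) local feature map from the CNN encoder
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        # Capture global spatial dependencies via self-attention
        normed = self.norm(tokens)
        attn_out, _ = self.attn(normed, normed, normed)
        tokens = tokens + attn_out                        # fuse local + global features
        # Adaptively reweight channels of the fused features
        weights = self.fc(tokens.mean(dim=1))             # (B, C)
        tokens = tokens * weights.unsqueeze(1)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    feats = torch.randn(2, 256, 32, 32)                  # e.g. one ResNet50 stage output
    block = ChannelSpatialBlock(channels=256)
    print(block(feats).shape)                            # torch.Size([2, 256, 32, 32])
```

In a full decoder, several such blocks would be applied to multi-scale encoder features before a fusion step (the GCFM in the paper); that fusion is not sketched here because the abstract does not specify its structure.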