Computer science
Artificial intelligence
Prior probability
Geometric transformation
Discriminant
Segmentation
Encoder
Pattern recognition (psychology)
Inference
Feature learning
Feature (linguistics)
Computer vision
Image (mathematics)
Bayesian probability
Philosophy
Operating system
Linguistics
Authors
Xin Li, Feng Xu, Fan Liu, Yao Tong, Xin Lyu, Jun Zhou
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume: 62, pp. 1-18
Cited by: 6
Identifier
DOI: 10.1109/tgrs.2023.3339291
Abstract
High spatial resolution remote sensing images (HRRSIs) contain intricate details and varied spectral distributions, making their semantic segmentation a challenging task. To address this problem, it is crucial to adequately capture both local and global contexts to reduce semantic ambiguity. While self-attention modules in vision transformers capture long-range context, they tend to sacrifice local details. In this article, we propose a geometric prior-guided interactive network (GPINet), a hybrid network that refines features across encoder and decoder stages. First, a dual-branch encoder with local-global interaction modules (LGIMs) is designed to fully exploit local and global contexts for feature refinement. Unlike commonly used skip connections or concatenations, the LGIMs bilaterally couple and exchange CNN features with transformer features through lossless transformation and elaborate cross-attention. Moreover, we introduce a geometric prior generation module (GPGM) that iteratively updates the randomly initialized geometric prior. Subsequently, the geometric priors are stored and used to guide feature recovery. Finally, a weighted summation is applied to the upsampled decoded features and geometric priors. By comprehensively capturing contexts and enabling lossless decoding and deterministic inference, GPINet learns discriminative representations that accurately specify pixel-level semantics. Experiments on three benchmark datasets demonstrate the superiority of the proposed GPINet over state-of-the-art methods. Furthermore, we validate the effectiveness of geometric priors and compare model sizes.
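The abstract does not provide implementation details, but the bilateral CNN-transformer exchange it describes can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the module name, feature dimensions, and the use of `nn.MultiheadAttention` are not taken from the authors' LGIM code, which is not reproduced here.

```python
# Minimal sketch (assumption): a bilateral cross-attention exchange between a
# CNN feature map and transformer tokens, loosely in the spirit of the LGIM
# described in the abstract. Names, shapes, and layer choices are illustrative.
import torch
import torch.nn as nn


class BilateralCrossAttention(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # The CNN branch queries transformer tokens for global context;
        # the transformer branch queries CNN tokens for local detail.
        self.cnn_from_trans = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.trans_from_cnn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_cnn = nn.LayerNorm(dim)
        self.norm_trans = nn.LayerNorm(dim)

    def forward(self, cnn_feat: torch.Tensor, trans_tokens: torch.Tensor):
        # cnn_feat: (B, C, H, W) local features; trans_tokens: (B, N, C) global tokens.
        b, c, h, w = cnn_feat.shape
        cnn_tokens = cnn_feat.flatten(2).transpose(1, 2)  # (B, H*W, C)

        # Each branch attends to the other and adds the result back as a
        # residual exchange, rather than concatenating the two streams.
        cnn_updated = cnn_tokens + self.cnn_from_trans(
            cnn_tokens, trans_tokens, trans_tokens)[0]
        trans_updated = trans_tokens + self.trans_from_cnn(
            trans_tokens, cnn_tokens, cnn_tokens)[0]

        cnn_updated = self.norm_cnn(cnn_updated)
        trans_updated = self.norm_trans(trans_updated)

        # Restore the CNN branch to its spatial layout for the next stage.
        cnn_out = cnn_updated.transpose(1, 2).reshape(b, c, h, w)
        return cnn_out, trans_updated


if __name__ == "__main__":
    # Toy shapes only; the paper's actual resolutions and channel widths are not given here.
    lgim_like = BilateralCrossAttention(dim=256, num_heads=8)
    cnn_feat = torch.randn(2, 256, 32, 32)
    trans_tokens = torch.randn(2, 32 * 32, 256)
    cnn_out, trans_out = lgim_like(cnn_feat, trans_tokens)
    print(cnn_out.shape, trans_out.shape)  # (2, 256, 32, 32), (2, 1024, 256)
```

In the same spirit, the final fusion the abstract mentions is a weighted summation of the upsampled decoded features and the stored geometric priors; with matching shapes this amounts to `alpha * upsampled + (1 - alpha) * prior` for some learned or fixed weight, though the paper's exact weighting scheme is not specified in the abstract.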