Computer science
Feature extraction
Artificial intelligence
Change detection
Transformer
Encoder
Pixel
Pattern recognition (psychology)
Computer vision
Feature (linguistics)
Semantic feature
Voltage
Engineering
Linguistics
Philosophy
Electrical engineering
Operating system
Authors
Yaping Wu,Lu Li,Nan Wang,Wei Li,Junfang Fan,Ran Tao,Xuezhi Wen,Yanfeng Wang
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 61, pp. 1-15
Citations: 1
Identifier
DOI:10.1109/tgrs.2023.3326813
Abstract
Change detection (CD) in remote sensing images is a critical task in which deep learning has achieved significant success. Current networks often represent changes of interest with pixel-based differencing, proportion, classification-based, or feature-concatenation methods. However, these methods fail to effectively detect the desired changes because they are highly sensitive to factors such as atmospheric conditions, lighting variations, and phenological variations, resulting in detection errors. Inspired by the Transformer structure, we adopt a cross-attention mechanism to extract feature differences between bitemporal images more robustly. The method is motivated by the assumption that if there is no change between an image pair, the semantic features of one temporal image can be well represented by the semantic features of the other; conversely, if there is a change, significant reconstruction errors arise. Therefore, a Cross Swin Transformer-based Siamese U-shaped network, named CSTSUNet, is proposed for remote sensing change detection. CSTSUNet consists of an encoder, difference feature extraction, and a decoder. The encoder is based on a hierarchical ResNet with a Siamese U-Net structure, allowing parallel processing of bitemporal images and extraction of multi-scale features. The difference feature extraction stage consists of four difference feature extraction modules that compute difference features at multiple scales; a Cross Swin Transformer is employed in each module to exchange information between the bitemporal images. The decoder takes the multi-scale difference features as input and injects details and boundaries iteratively, level by level, making the change map progressively more accurate. We conduct experiments on three public datasets, and the results demonstrate that the proposed CSTSUNet outperforms other state-of-the-art methods in both qualitative and quantitative analyses. Our code is available at https://github.com/l7170/CSTSUNet.git.
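The sketch below is a minimal illustration of the cross-attention reconstruction idea described in the abstract, not the authors' CSTSUNet implementation (which is available at the GitHub link above). It reconstructs the features of one temporal image from the other and uses the reconstruction error as a difference feature. For simplicity it uses standard multi-head cross-attention in PyTorch rather than the windowed attention of a Cross Swin Transformer; all module and parameter names are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of cross-attention-based
# difference feature extraction for bitemporal change detection.
import torch
import torch.nn as nn


class CrossAttentionDifference(nn.Module):
    """Reconstruct time-1 features from time-2 features with cross-attention,
    then use the reconstruction error as the difference feature."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_t1: torch.Tensor, feat_t2: torch.Tensor) -> torch.Tensor:
        # feat_t1, feat_t2: (B, C, H, W) feature maps from a Siamese encoder
        b, c, h, w = feat_t1.shape
        q = feat_t1.flatten(2).transpose(1, 2)   # queries from time 1: (B, HW, C)
        kv = feat_t2.flatten(2).transpose(1, 2)  # keys/values from time 2: (B, HW, C)
        recon, _ = self.attn(q, kv, kv)          # reconstruct t1 features from t2
        diff = self.norm(q - recon)              # reconstruction error ~ change signal
        return diff.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    # Toy usage: unchanged regions should yield small reconstruction errors.
    f1 = torch.randn(2, 64, 32, 32)
    f2 = torch.randn(2, 64, 32, 32)
    module = CrossAttentionDifference(dim=64)
    print(module(f1, f2).shape)  # torch.Size([2, 64, 32, 32])
```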