Computer science
Discriminant
Feature (linguistics)
Artificial intelligence
Change detection
Context (archaeology)
Pattern recognition (psychology)
Encoder
Linguistics
Biology
Operating system
Philosophy
Paleontology
Authors
Xiaokang Zhang,Weikang Yu,Man-On Pun
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/Issue: 60: 1-18
Citations: 15
Identifiers
DOI:10.1109/tgrs.2022.3157721
Abstract
Deep learning (DL) approaches based on convolutional encoder–decoder networks have shown promising results in bitemporal change detection. However, their performance is limited by insufficient contextual information aggregation, because they cannot fully capture the implicit contextual dependency relationships among feature maps at different levels. Moreover, harvesting long-range contextual information typically incurs high computational complexity. To circumvent these challenges, we propose multilevel deformable attention-aggregated networks (MLDANets) to effectively learn long-range dependencies across multiple levels of bitemporal convolutional features for multiscale context aggregation. Specifically, a multilevel change-aware deformable attention (MCDA) module, consisting of linear projections with learnable parameters, is built on multihead self-attention (SA) with a deformable sampling strategy. It is applied in the skip connections of an encoder–decoder network, taking a bitemporal deep feature hypersequence (BDFH) as input. MCDA progressively attends to a set of informative sampling locations in multilevel feature maps for each query element in the BDFH. Simultaneously, MCDA learns to characterize beneficial information from different spatial and feature subspaces of the BDFH using multiple attention heads for change perception. As a result, contextual dependencies across multiple levels of bitemporal feature maps can be adaptively aggregated via attention weights to generate multilevel discriminative change-aware representations. Experiments on very-high-resolution (VHR) datasets verify that MLDANets outperform state-of-the-art change detection approaches, with dramatically faster training convergence and high computational efficiency.
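To make the deformable-attention idea in the abstract concrete, the following is a minimal single-head NumPy sketch of the general mechanism (as in deformable attention at large, not the authors' MCDA implementation): a query vector predicts, via learned linear projections, a set of sampling offsets and softmax attention weights per feature level; the feature maps are bilinearly sampled at the offset locations and aggregated by the weights. All names here (`W_off`, `W_att`, `W_out`, the level/point counts) are illustrative assumptions, and the multihead and change-aware aspects of MCDA are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def bilinear_sample(fmap, y, x):
    """Bilinearly interpolate a (H, W, C) feature map at fractional (y, x)."""
    H, W, _ = fmap.shape
    y, x = np.clip(y, 0, H - 1), np.clip(x, 0, W - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * fmap[y0, x0] + (1 - wy) * wx * fmap[y0, x1]
            + wy * (1 - wx) * fmap[y1, x0] + wy * wx * fmap[y1, x1])

def deformable_attention(query, ref_pt, fmaps, W_off, W_att, W_out):
    """One query attends to K sampled points on each of L feature levels."""
    L = len(fmaps)
    K = W_off.shape[1] // (2 * L)
    offsets = (query @ W_off).reshape(L, K, 2)   # predicted (dy, dx) per level
    logits = query @ W_att                       # one logit per sampling point
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                     # softmax over all L*K points
    sampled = []
    for lvl, fmap in enumerate(fmaps):
        H, W, _ = fmap.shape
        # reference point is normalized to [0, 1], scaled to each level's grid
        ry, rx = ref_pt[0] * (H - 1), ref_pt[1] * (W - 1)
        for k in range(K):
            dy, dx = offsets[lvl, k]
            sampled.append(bilinear_sample(fmap, ry + dy, rx + dx))
    agg = (weights[:, None] * np.asarray(sampled)).sum(axis=0)
    return agg @ W_out                           # output projection

# Toy multilevel pyramid: 3 levels, channel dim 8, 4 points per level.
C, K, L = 8, 4, 3
fmaps = [rng.standard_normal((16 // 2**lvl, 16 // 2**lvl, C)) for lvl in range(L)]
query = rng.standard_normal(C)
W_off = rng.standard_normal((C, L * K * 2)) * 0.1
W_att = rng.standard_normal((C, L * K))
W_out = rng.standard_normal((C, C))
out = deformable_attention(query, (0.5, 0.5), fmaps, W_off, W_out=W_out, W_att=W_att)
print(out.shape)  # (8,)
```

Because each query only samples L·K points rather than attending densely over every spatial location of every level, the cost per query is linear in the (small) number of sampling points, which reflects the efficiency argument made in the abstract.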