Subject tags
Computer science
Background (archaeology)
Remote sensing
Extraction (chemistry)
Satellite
Computer vision
Feature extraction
Artificial intelligence
Geology
Engineering
Chromatography
Aerospace engineering
Paleontology
Chemistry
Authors
Zhilin Qu, M. Li, Zehua Chen
Identifiers
DOI:10.1109/lgrs.2025.3545881
Abstract
Road extraction from high-resolution satellite imagery plays a crucial role in urban planning and geographic information updating. However, extracted roads often suffer from discontinuity and fragmentation, which poses challenges for extraction methods. In addition, the complex backgrounds of remote sensing imagery can cause interference from visually similar objects in the surrounding environment. To alleviate these problems, we propose a Context-Aware and Road-Enhancement road extraction network (CARENet). To improve the continuity and integrity of the extracted roads, a bidirectional strip feature extraction module (BSFEM) is designed in the skip connections. This module is a novel strip feature extraction method that preserves road edge information at each scale and passes it to the decoder, providing rich and accurate road detail features. Subsequently, a dilated conv-based Selective Scan module (DBSSM) is designed to achieve linear attention while minimizing the negative effects of complex backgrounds. The DBSSM consists of multiple context-aware blocks that use 2-D Selective Scan (SS2D) to capture contextual relationships. Experiments on two public road datasets demonstrate that CARENet outperforms several recent methods on various evaluation metrics, including Intersection over Union (IoU) and F1-score. Our source code is available at https://github.com/ZehuaChenLab/CARENet.