Computer science
Artificial intelligence
Segmentation
Information extraction
Computer vision
Feature extraction
Encoder
Deep learning
Spatial analysis
Image segmentation
Artificial neural network
Pattern recognition (psychology)
Remote sensing
Geography
Operating systems
Authors
Yingxiao Xu, Hao Chen, Chun Du, Jun Li
Identifier
DOI:10.1109/tgrs.2021.3073923
Abstract
With the rise of deep learning methods, road extraction has been widely applied in city planning and autonomous driving. However, extracting roads around occluded areas remains very challenging, even in high-resolution remote sensing images. Existing approaches regard road extraction as an isolated binary segmentation task and ignore the contextual information of the surroundings in the optical image itself, especially the potential dependence between roads and buildings. To address the occlusion problem, we propose a spatial attention-based road extraction neural network, named MSACon, that uses the contextual relation between roads and buildings to extract roads more precisely. First, we employ an existing building extraction method to predict buildings in the optical images. Second, we calculate the signed distance map (SDM) from the building extraction results (which may be inaccurate) as ambiguous auxiliary information to infer potential roads in the optical images. Because the color, lines, and texture of the optical images and the SDM are distinct, we then design a two-branch encoder to extract features and integrate the cross-domain features into the road decoder through a spatial attention-based fusion mechanism. Experiments demonstrate that the proposed method outperforms other state-of-the-art approaches even with ambiguous auxiliary information. Furthermore, MSACon shows clear advantages in finding inconspicuous roads in the optical images and eliminating noisy roads, especially in areas where buildings are located along the roads.
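As an illustration only, the sketch below shows two ingredients the abstract mentions: computing a signed distance map (SDM) from a (possibly noisy) binary building mask, and fusing optical-branch and SDM-branch features with a simple per-pixel spatial-attention gate. All names, shapes, the sign convention, and the gate design are assumptions for illustration; they are not the authors' MSACon implementation.

```python
# Hypothetical sketch, not the authors' code: SDM from a building mask plus a
# minimal spatial-attention fusion of two encoder feature streams.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import distance_transform_edt


def signed_distance_map(building_mask: np.ndarray) -> np.ndarray:
    """Signed distance to the building boundary: negative inside buildings,
    positive outside (this sign convention is an assumption)."""
    inside = distance_transform_edt(building_mask)        # distance to nearest background pixel
    outside = distance_transform_edt(1 - building_mask)   # distance to nearest building pixel
    return outside - inside


class SpatialAttentionFusion(nn.Module):
    """Fuse optical and SDM features with a per-pixel attention gate; a simple
    stand-in for the spatial attention-based fusion described in the abstract."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, f_optical: torch.Tensor, f_sdm: torch.Tensor) -> torch.Tensor:
        attn = self.gate(torch.cat([f_optical, f_sdm], dim=1))  # (N, 1, H, W) gate
        return f_optical + attn * f_sdm                          # gated residual fusion


if __name__ == "__main__":
    mask = np.zeros((64, 64), dtype=np.uint8)
    mask[20:40, 20:40] = 1                      # toy building footprint
    sdm = signed_distance_map(mask)
    fuse = SpatialAttentionFusion(channels=16)
    out = fuse(torch.randn(1, 16, 64, 64), torch.randn(1, 16, 64, 64))
    print(sdm.shape, out.shape)
```

The gated residual form keeps the optical features as the primary signal and lets the auxiliary SDM features contribute only where the learned gate deems them useful, which matches the abstract's use of the SDM as ambiguous auxiliary information; the paper's actual fusion mechanism may differ.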