Computer science
Pyramid (geometry)
Artificial intelligence
Feature extraction
Segmentation
Image resolution
Convolution (computer science)
Computer vision
Pattern recognition (psychology)
Encoder
Image segmentation
Remote sensing
Pooling
Artificial neural network
Operating system
Optics
Physics
Geology
Authors
Weiyan Qiu, Lingjia Gu, Fang Gao, Tao Jiang
Source
Journal: IEEE Geoscience and Remote Sensing Letters
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Pages: 20: 1-5
Citations: 24
Identifier
DOI: 10.1109/lgrs.2023.3243609
Abstract
Accurate building extraction from very high-resolution (VHR) remote sensing images plays an important role in urban dynamic monitoring, planning, and management. However, achieving building extraction with high accuracy and integrity remains challenging because of diverse building appearances and complex ground backgrounds in VHR remote sensing images. Recently, U-Net (UNet) has been proven capable of feature extraction and semantic segmentation of remote sensing images. However, UNet cannot capture sufficient multiscale and multilevel features with larger receptive fields. To address these problems, an improved network based on the UNet structure (Refine-UNet) is proposed for extracting buildings from VHR images. The proposed Refine-UNet mainly consists of an encoder module, a decoder module, and a refine skip connection scheme. The refine skip connection scheme is composed of an atrous spatial pyramid pooling (ASPP) module and several improved depthwise separable convolution (IDSC) modules. Experimental results on the Jilin-1 VHR datasets with a spatial resolution of 0.75 m demonstrate that, compared with UNet, the pyramid scene parsing network (PSPNet), DeepLabV3+, and a deep convolutional encoder-decoder architecture for image segmentation (SegNet), the proposed Refine-UNet obtains more accurate building extraction results, achieving the best precision of 95.1% and intersection over union (IoU) of 87.0%, indicating great practical potential.
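The paper does not include implementation details, but the two building blocks named in the abstract (an ASPP module and depthwise separable convolutions used to refine skip connections) are standard components. Below is a minimal PyTorch sketch of how they are commonly composed; the channel counts, atrous rates, and module structure are illustrative assumptions, not the authors' IDSC or Refine-UNet design.

```python
# Hedged sketch: ASPP built from depthwise-separable dilated convolutions,
# as it might refine a UNet skip-connection feature map.
# All hyperparameters (channels, atrous rates) are assumptions, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv followed by a pointwise 1x1 conv, with BN + ReLU."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size=3, padding=dilation,
            dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated branches + global pooling."""

    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [DepthwiseSeparableConv(in_ch, out_ch, dilation=r) for r in rates])
        self.global_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.ReLU(inplace=True))
        # Fuse all branches back to a single feature map.
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.global_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))


if __name__ == "__main__":
    # Toy check: refine a hypothetical 256-channel skip feature map.
    skip = torch.randn(1, 256, 64, 64)
    refined = ASPP(in_ch=256, out_ch=128)(skip)
    print(refined.shape)  # torch.Size([1, 128, 64, 64])
```

The design intent reflected here is the one the abstract states: dilated branches enlarge the receptive field to gather multiscale context, while depthwise separable convolutions keep the added refinement on the skip path lightweight.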