Keywords: Computer science, Residual, Artificial intelligence, Feature extraction, Lidar, Point cloud, Encoder, Convolutional neural network, Feature (linguistics), Pattern recognition (psychology), Object detection, Computer vision, Data mining, Remote sensing, Geography, Operating system, Philosophy, Linguistics, Algorithm
Authors
Jianfeng Huang, Xinchang Zhang, Qinchuan Xin, Ying Sun, Pengcheng Zhang
Source
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Date: 2019-05-01
Volume 151, pages 91-105
Citations: 177
Identifier
DOI: 10.1016/j.isprsjprs.2019.02.019
Abstract
Automated extraction of buildings from remotely sensed data is important for a wide range of applications but remains challenging because semantic features are difficult to extract from complex scenes such as urban areas. Recently developed fully convolutional networks (FCNs) have been shown to perform well on urban object extraction owing to their outstanding feature-learning and end-to-end pixel-labeling abilities. However, the feature-fusion and skip-connection refinement modules commonly used in FCNs often overlook feature selection and can reduce the learning efficiency of the networks. In this paper, we develop an end-to-end trainable gated residual refinement network (GRRNet) that fuses high-resolution aerial images and LiDAR point clouds for building extraction. A modified residual learning network serves as the encoder of GRRNet to learn multi-level features from the fused data, and a gated feature labeling (GFL) unit is introduced to reduce unnecessary feature transmission and refine the classification results. The proposed GRRNet is tested on a publicly available dataset with urban and suburban scenes. Comparative results show that GRRNet achieves competitive building extraction performance relative to other approaches. The source code of GRRNet is made publicly available for further studies.
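The core idea behind the GFL unit — a learned gate that suppresses unnecessary low-level features before they are merged back into the refinement path — can be illustrated with a minimal schematic sketch. This is not the paper's implementation: the function name `gated_refine`, the scalar gate parameters `w` and `b`, and the per-element formulation are all hypothetical simplifications of what, in the actual network, would be convolutional layers operating on feature maps.

```python
import math

def sigmoid(x):
    """Standard logistic function, mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_refine(high_level, low_level, w=0.5, b=0.0):
    """Schematic gated feature fusion (hypothetical, not the paper's code).

    For each position, a sigmoid gate computed from the high-level
    feature decides how much of the corresponding low-level detail is
    transmitted before being added back -- so uninformative low-level
    responses are attenuated rather than passed through unchanged, as
    a plain skip connection would do.
    """
    refined = []
    for h, l in zip(high_level, low_level):
        g = sigmoid(w * h + b)      # gate value in (0, 1)
        refined.append(h + g * l)   # only the gated portion of l is fused
    return refined

# Example: with h = 0 the gate is sigmoid(0) = 0.5, so only half of
# the low-level detail is added; larger h opens the gate further.
print(gated_refine([0.0, 2.0], [1.0, 1.0]))
```

A plain skip connection corresponds to forcing `g = 1` everywhere; the gate turns that fixed shortcut into a learned, input-dependent selection, which is the feature-selection behavior the abstract attributes to the GFL unit.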