Computer science
Pipeline (software)
Artificial intelligence
Convolutional neural network
Change detection
Deep learning
Architecture
Semantics (computer science)
Extraction (chemistry)
Geography
Programming language
Chromatography
Chemistry
Archaeology
Authors
Cheng Liao,Han Hu,Xuekun Yuan,Haifeng Li,Chao Liu,Chunyang Liu,Gui Fu,Yulin Ding,Qing Zhu
Source
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Date: 2023-05-31
Volume 201, pages 138-152
Cited by: 9
Identifier
DOI:10.1016/j.isprsjprs.2023.05.011
Abstract
Automatic and periodic recompilation of building databases with up-to-date high-resolution images has become a critical requirement for rapidly developing urban environments. However, the architecture of most existing change-extraction approaches attempts to learn features related to changes but ignores objectives related to buildings. This inevitably generates significant pseudo-changes, owing to factors such as seasonal variation in images and the inclination of building façades. To alleviate these problems, we developed a contrastive learning approach that validates historical building footprints against single up-to-date remotely sensed images. This contrastive learning strategy allowed us to inject building semantics into a change-detection pipeline by increasing the distinguishability of building features from non-building features. In addition, to reduce the effects of inconsistencies between historical building polygons and the buildings in up-to-date images, we employed a deformable convolutional neural network to learn offsets intuitively. In summary, we formulated a multi-branch building extraction method that separately identifies newly constructed and removed buildings. To validate our method, we conducted comparative experiments on the public Wuhan University building change detection dataset and on SI-BU, a more practical dataset that we established; our method achieved F1 scores of 93.99% and 70.74% on these datasets, respectively. Moreover, when the public dataset was split in the same manner as in previous related studies, our method achieved an F1 score of 94.63%, surpassing the state-of-the-art method. Code and datasets are available at https://vrlab.org.cn/~hanhu/projects/bcenet.
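The sketch below illustrates, in PyTorch, the two mechanisms the abstract describes: a deformable convolution whose learned offsets compensate for misalignment between historical footprint polygons and buildings in the up-to-date image, and a contrastive objective that pushes building features away from non-building features. It is a minimal illustration under assumed shapes and interfaces, not the authors' BCE-Net implementation; all names here (FootprintAlignedHead, building_contrastive_loss, margin, and so on) are hypothetical.

```python
# Minimal sketch (not the authors' code) of footprint-conditioned deformable
# alignment plus a building/non-building contrastive loss. All module and
# variable names are hypothetical illustrations of the abstract's ideas.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d


class FootprintAlignedHead(nn.Module):
    """Aligns image features to historical footprints via a deformable
    convolution with learned offsets, then scores per-pixel building
    evidence with two branches (newly constructed / removed)."""

    def __init__(self, in_ch=64):
        super().__init__()
        # Offsets are predicted from image features concatenated with the
        # rasterized footprint mask: 2 * kh * kw channels for a 3x3 kernel.
        self.offset_pred = nn.Conv2d(in_ch + 1, 2 * 3 * 3, 3, padding=1)
        self.deform = DeformConv2d(in_ch, in_ch, 3, padding=1)
        self.new_branch = nn.Conv2d(in_ch, 1, 1)      # newly constructed
        self.removed_branch = nn.Conv2d(in_ch, 1, 1)  # demolished

    def forward(self, feats, footprints):
        # feats: (B, C, H, W) image features; footprints: (B, 1, H, W) mask
        offsets = self.offset_pred(torch.cat([feats, footprints], dim=1))
        aligned = self.deform(feats, offsets)  # compensate polygon misregistration
        return self.new_branch(aligned), self.removed_branch(aligned)


def building_contrastive_loss(feats, footprints, margin=1.0):
    """One simple realization of the 'distinguishability' objective: pull the
    mean building embedding and mean non-building embedding apart by a margin.
    Assumes the batch contains both building and non-building pixels."""
    emb = F.normalize(feats, dim=1)           # (B, C, H, W), unit channel vectors
    pos = (footprints > 0.5).squeeze(1)       # (B, H, W) building pixels
    pixels = emb.permute(0, 2, 3, 1)          # (B, H, W, C)
    pos_proto = pixels[pos].mean(0)           # building prototype, (C,)
    neg_proto = pixels[~pos].mean(0)          # non-building prototype, (C,)
    return F.relu(margin - (pos_proto - neg_proto).norm())
```

In this sketch the offset branch is conditioned on both the image features and the rasterized footprints, so the network can learn where the historical polygons disagree with the current image; the hinge loss on class prototypes is only one plausible contrastive formulation of the building/non-building separation the abstract mentions.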