Synthetic aperture radar
Segmentation
Joint (building)
Artificial intelligence
Computer science
Computer vision
Fusion
Remote sensing
Sensor fusion
Deep learning
Modality
Pattern recognition (psychology)
Engineering
Geology
Philosophy
Linguistics
Architectural engineering
Chemistry
Polymer chemistry
Authors
Xue Li, Guo Zhang, Hao Cui, Shasha Hou, Yujia Chen, Zhijiang Li, Haifeng Li, Huabin Wang
Source
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Date: 2023-01-01
Volume 195, pp. 178-191
Citations: 4
Identifiers
DOI: 10.1016/j.isprsjprs.2022.11.015
Abstract
Automatic, high-precision extraction of buildings from remote sensing images has a wide range of applications and is of considerable importance. Optical and synthetic aperture radar (SAR) images are typical multimodal remote sensing data acquired with different imaging mechanisms. To bridge the large gap between them and achieve high-precision joint semantic segmentation, this study proposes a progressive fusion learning framework. The framework explicitly extracts the shared features (that is, the modal invariants) of multimodal images as an information medium and realizes information fusion through multistage learning. Based on this framework, we design the multistage multimodal fusion network (MMFNet), which uses phase as a modal invariant to jointly exploit optical and SAR images for high-precision building extraction. We conducted experiments on the Multi-Sensor All-Weather Mapping aerial dataset and the WHU-OPT-SAR_WuHan satellite dataset. The results show that MMFNet extracts buildings effectively and recovers building edge details more accurately, improving on other multimodal joint segmentation methods by 0.2% to 9.5%.
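The abstract only sketches the framework, so the following is a minimal, hypothetical PyTorch sketch of the general idea it describes: two modality-specific branches plus a shared branch whose modal-invariant features serve as the information medium, fused progressively over several stages. All names and hyperparameters here (ProgressiveFusionNet, FusionStage, channel counts, number of stages) are assumptions for illustration only, not the authors' MMFNet; in particular, how MMFNet derives phase features from SAR imagery is not reproduced.

```python
import torch
import torch.nn as nn


class FusionStage(nn.Module):
    """One fusion stage: merges optical, SAR, and shared features.

    Hypothetical design for illustration; MMFNet's actual stage differs.
    """

    def __init__(self, channels):
        super().__init__()
        self.merge = nn.Sequential(
            nn.Conv2d(channels * 3, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, opt_feat, sar_feat, shared_feat):
        # The shared (modal-invariant) features act as the information
        # medium between the two modality-specific branches.
        return self.merge(torch.cat([opt_feat, sar_feat, shared_feat], dim=1))


class ProgressiveFusionNet(nn.Module):
    """Minimal sketch of a progressive (multistage) fusion framework."""

    def __init__(self, channels=64, num_stages=3, num_classes=2):
        super().__init__()
        self.opt_stem = nn.Conv2d(3, channels, 3, padding=1)  # optical: RGB
        self.sar_stem = nn.Conv2d(1, channels, 3, padding=1)  # SAR: single band
        # Shared branch: stands in for the modal-invariant (phase-like)
        # feature extractor; here it simply sees both inputs concatenated.
        self.shared_stem = nn.Conv2d(4, channels, 3, padding=1)
        self.stages = nn.ModuleList(
            [FusionStage(channels) for _ in range(num_stages)]
        )
        self.head = nn.Conv2d(channels, num_classes, 1)

    def forward(self, optical, sar):
        opt = self.opt_stem(optical)
        s = self.sar_stem(sar)
        shared = self.shared_stem(torch.cat([optical, sar], dim=1))
        fused = shared
        for stage in self.stages:
            # Information fusion is realized progressively, stage by stage.
            fused = stage(opt, s, fused)
        return self.head(fused)  # per-pixel building/background logits


if __name__ == "__main__":
    net = ProgressiveFusionNet()
    optical = torch.randn(1, 3, 128, 128)
    sar = torch.randn(1, 1, 128, 128)
    print(net(optical, sar).shape)  # torch.Size([1, 2, 128, 128])
```

The point of the sketch is the control flow: the shared features are refined across stages while the modality-specific features are re-injected at each stage, rather than fusing the two modalities once at a single layer.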