Keywords
Computer science
Domain adaptation
Change detection
Remote sensing
Computer vision
Digital surface
Artificial intelligence
Domain (mathematical analysis)
Adaptation (eye)
Digital elevation model
Pattern recognition (psychology)
Lidar
Geology
Mathematical analysis
Physics
Mathematics
Classifier (UML)
Optics
Authors
Yuxing Xie,Xiangtian Yuan,Xiao Xiang Zhu,Jiaojiao Tian
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/pages: 62: 1-20
Identifier
DOI: 10.1109/tgrs.2024.3362680
Abstract
In this article, we propose a multimodal co-learning framework for building change detection. The framework jointly trains a Siamese bitemporal image network and a height difference map (HDiff) network on labeled source data and unlabeled target data pairs. Three co-learning combinations (vanilla co-learning, fusion co-learning, and detached fusion co-learning) are proposed and investigated with two types of co-learning loss functions within our framework. Our experimental results demonstrate that the proposed methods can leverage unlabeled target data pairs and thereby improve the performance of single-modal neural networks on the target data. In addition, our synthetic-to-real experiments demonstrate that the recently published synthetic dataset SMARS can feasibly be used in real change detection scenarios, with the best result achieving an F1 score of 79.29%.
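The abstract describes joint training of two modality-specific networks: a supervised change-detection loss on labeled source pairs and a co-learning (cross-modal consistency) loss on unlabeled target pairs. The following PyTorch-style sketch illustrates one way such a vanilla co-learning step could look. All names (ImageBranch, HeightBranch, co_learning_step), layer sizes, and the choice of a mean-squared consistency loss are illustrative assumptions based only on the abstract, not the authors' implementation.

    # Hypothetical sketch of a vanilla co-learning training step (assumptions noted above).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ImageBranch(nn.Module):
        """Siamese encoder over bitemporal images; outputs a per-pixel change logit map."""
        def __init__(self, in_ch=3):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            )
            self.head = nn.Conv2d(64, 1, 1)  # fuses the two temporal feature maps

        def forward(self, img_t1, img_t2):
            f1, f2 = self.encoder(img_t1), self.encoder(img_t2)  # shared weights (Siamese)
            return self.head(torch.cat([f1, f2], dim=1))

    class HeightBranch(nn.Module):
        """Network over the height difference map (HDiff); outputs a change logit map."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1),
            )

        def forward(self, hdiff):
            return self.net(hdiff)

    def co_learning_step(img_net, hgt_net, src, tgt, lam=1.0):
        """One training step: supervised loss on labeled source pairs plus a cross-modal
        consistency loss on unlabeled target pairs (one possible co-learning loss; the
        article investigates two variants)."""
        # Supervised source-domain loss for both branches.
        p_img = img_net(src["img_t1"], src["img_t2"])
        p_hgt = hgt_net(src["hdiff"])
        sup = (F.binary_cross_entropy_with_logits(p_img, src["label"]) +
               F.binary_cross_entropy_with_logits(p_hgt, src["label"]))

        # Unsupervised consistency between the two modalities on the target domain.
        q_img = torch.sigmoid(img_net(tgt["img_t1"], tgt["img_t2"]))
        q_hgt = torch.sigmoid(hgt_net(tgt["hdiff"]))
        consistency = F.mse_loss(q_img, q_hgt)

        return sup + lam * consistency

The fusion and detached fusion co-learning combinations named in the abstract presumably also combine the two branches' features or predictions; how they do so is not specified in the abstract and is omitted from this sketch.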