Computer science
Artificial intelligence
Segmentation
Domain adaptation
Edge detection
Pattern recognition
Natural language processing
Unsupervised learning
Machine learning
Mathematics
Mathematical analysis
Authors
Mengyuan Yang, Rui Yang, Shikang Tao, Xin Zhang, Min Wang
Source
Journal: Neural Networks
[Elsevier]
Date: 2024-07-30
Volume/Issue: 179: 106581
Citations: 1
Identifier
DOI:10.1016/j.neunet.2024.106581
Abstract
Unsupervised domain adaptation (UDA) is a weakly supervised learning technique that classifies images in an unlabeled target domain by leveraging labeled samples from a source domain. Owing to the complexity of imaging conditions and the content of remote sensing images, using UDA to accurately extract artificial features such as buildings from high-spatial-resolution (HSR) imagery remains challenging. In this study, we propose a new UDA method for building extraction, the contrastive domain adaptation network (CDANet), which combines adversarial learning and contrastive learning. CDANet consists of a single multitask generator and dual discriminators. The generator employs a region-edge dual-branch structure that strengthens edge extraction and benefits the extraction of small, densely distributed buildings. The dual discriminators receive the region and edge predictions, respectively, enabling multilevel adversarial learning. During adversarial training, CDANet aligns similar pixel features across domains in the embedding space through a regional pixelwise contrastive loss. A self-training (ST) strategy based on pseudolabel generation is further employed to address the intradomain discrepancy within the target domain. Comprehensive experiments validate CDANet on three publicly accessible datasets, namely WHU, Austin, and Massachusetts. Ablation experiments show that the dual-branch generator structure, the contrastive loss, and the ST strategy each improve building extraction accuracy. Method comparisons confirm that CDANet outperforms several state-of-the-art methods, including AdaptSegNet, AdvEnt, IntraDA, FDANet, and ADRS, in terms of F1 score and mIoU.
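The cross-domain pixelwise contrastive alignment can be illustrated with a short sketch. The snippet below is a minimal, assumption-laden illustration rather than the paper's implementation: it computes a supervised InfoNCE-style loss over L2-normalized pixel embeddings pooled from both domains, treating same-class pixels (source labels plus target pseudolabels) as positives. The function name, temperature, and subsampling budget are hypothetical.

import torch
import torch.nn.functional as F

def pixel_contrastive_loss(src_feat, src_label, tgt_feat, tgt_pseudo,
                           temperature=0.1, max_pixels=1024):
    # src_feat, tgt_feat: (N, C) pixel embeddings; src_label, tgt_pseudo:
    # (N,) integer class ids (pseudolabels on the target domain).
    feats = F.normalize(torch.cat([src_feat, tgt_feat], dim=0), dim=1)
    labels = torch.cat([src_label, tgt_pseudo], dim=0)
    # Subsample pixels so the pairwise similarity matrix stays tractable.
    if feats.size(0) > max_pixels:
        idx = torch.randperm(feats.size(0), device=feats.device)[:max_pixels]
        feats, labels = feats[idx], labels[idx]
    sim = feats @ feats.t() / temperature              # pairwise cosine logits
    pos_mask = (labels[:, None] == labels[None, :]).float()
    pos_mask.fill_diagonal_(0)                         # a pixel is not its own positive
    self_mask = 1.0 - torch.eye(feats.size(0), device=feats.device)
    # log-probability of each candidate pair under a softmax over all other pixels
    log_prob = sim - torch.log((torch.exp(sim) * self_mask).sum(1, keepdim=True))
    pos_count = pos_mask.sum(1)
    loss = -(pos_mask * log_prob).sum(1) / pos_count.clamp(min=1)
    return loss[pos_count > 0].mean()                  # average over anchors with positives

Minimizing this loss pulls same-class pixels from the two domains together in the embedding space while pushing different-class pixels apart, which is the intuition behind the regional contrastive alignment described in the abstract.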
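The pseudolabel-based self-training stage can be sketched in the same hedged spirit: keep only high-confidence target predictions and mark the rest with an ignore index so they are excluded from the retraining loss. The threshold value and the function name are assumptions, not details from the paper.

import torch

@torch.no_grad()
def generate_pseudolabels(model, target_images, threshold=0.9, ignore_index=255):
    # model(target_images) is assumed to return per-pixel class logits (B, K, H, W).
    probs = torch.softmax(model(target_images), dim=1)
    conf, labels = probs.max(dim=1)           # per-pixel confidence and hard label
    labels[conf < threshold] = ignore_index   # drop uncertain pixels from retraining
    return labels                             # reused as targets on the target domain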