Authors
Juepeng Zheng, Yan Wen, Mengxuan Chen, Shuai Yuan, Weijia Li, Yuchao Zhao, Wencheng Wu, Lixian Zhang, Runmin Dong, Haohuan Fu
Source
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Date: 2024-02-01
Volume/Pages: 208, 245-260
Identifiers
DOI:10.1016/j.isprsjprs.2024.01.015
Abstract
Domain adaptation methods can transfer knowledge across different domains, tackling multi-sensor, multi-temporal or cross-regional remote sensing scenarios because they do not rely on labels or annotations in the target domain. However, most previous studies have focused on closed-set domain adaptation, which assumes that the source and target domains share identical class labels. Real-world scenarios are typically more complex, and the model may encounter novel classes that are not included in the source domain, commonly referred to as "unknown" classes. Here we investigate the open-set domain adaptation scenario in remote sensing scene classification, where the label space of the target domain only partially overlaps with that of the source domain. To deal with this problem, we propose a novel open-set domain adaptation method for scene classification using remote sensing images, named the Multi-Adversarial Open-Set Domain Adaptation Network (MAOSDAN). MAOSDAN consists of three main components. First, we employ an attention-aware Open Set BackPropagation (OSBP) to better distinguish "unknown" from "known" samples in the target domain. Second, auxiliary adversarial learning is designed to mitigate the negative transfer that arises from forcefully aligning "unknown" target samples during network training. Finally, we adopt adaptive entropy suppression to increase the prediction confidence of samples and prevent them from being mistakenly classified. MAOSDAN achieves an average score of 75.07% on three publicly available remote sensing datasets, outperforming other open-set domain adaptation algorithms by 4.52∼17.15% and surpassing the baseline deep learning model by 18.12%. A comprehensive experimental evaluation demonstrates that MAOSDAN shows promising prospects for practical and general domain adaptation scenarios, especially where the label set of the source domain is a subset of that of the target domain.
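The abstract builds on Open Set BackPropagation (OSBP), which trains a (K+1)-way classifier against a feature extractor through a gradient reversal layer so that each target sample is pushed away from a fixed "unknown"-probability boundary (clearly known or clearly unknown). The sketch below illustrates only that adversarial objective in PyTorch; the attention mechanism, auxiliary adversary and adaptive entropy suppression specific to MAOSDAN are omitted, and all module names, feature dimensions and the threshold t = 0.5 are illustrative assumptions rather than details confirmed by this abstract.

```python
# Minimal sketch of an OSBP-style adversarial objective (one building block of
# the method described above).  Names, dimensions and t = 0.5 are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)


class OpenSetClassifier(nn.Module):
    """K known classes plus one extra 'unknown' logit (K + 1 outputs)."""
    def __init__(self, feat_dim=256, num_known=12):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_known + 1)

    def forward(self, feats, reverse=False, lam=1.0):
        if reverse:  # adversarial path used for unlabeled target samples
            feats = grad_reverse(feats, lam)
        return self.fc(feats)


def osbp_target_loss(logits, t=0.5):
    """Binary cross-entropy pushing p(unknown) toward the boundary t.

    The classifier minimizes this loss; because the features passed through the
    gradient reversal layer, the feature extractor effectively maximizes it,
    i.e. it moves each target sample away from the boundary."""
    p_unknown = F.softmax(logits, dim=1)[:, -1].clamp(1e-6, 1 - 1e-6)
    return -(t * torch.log(p_unknown) + (1 - t) * torch.log(1 - p_unknown)).mean()


# Toy usage with random tensors standing in for backbone features.
if __name__ == "__main__":
    clf = OpenSetClassifier(feat_dim=256, num_known=12)
    src_feats = torch.randn(8, 256)
    tgt_feats = torch.randn(8, 256)
    src_labels = torch.randint(0, 12, (8,))

    loss_cls = F.cross_entropy(clf(src_feats), src_labels)      # supervised source loss
    loss_adv = osbp_target_loss(clf(tgt_feats, reverse=True))   # adversarial target loss
    (loss_cls + loss_adv).backward()
```

In a full pipeline this objective would sit alongside the attention weighting, auxiliary adversary and entropy-suppression terms mentioned in the abstract; at test time a target sample would be rejected as "unknown" when its (K+1)-th probability exceeds the chosen threshold.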