Computer science
Metric (unit)
Machine learning
Artificial intelligence
Operations management
Economics
Authors
Bo Fu, Zhangjie Cao, Mingsheng Long, Jianmin Wang
Identifier
DOI:10.1007/978-3-030-58555-6_34
Abstract
Universal domain adaptation (UniDA) transfers knowledge between domains without any constraint on the label sets, extending the applicability of domain adaptation in the wild. In UniDA, both the source and target label sets may hold individual labels not shared by the other domain. One challenge of UniDA is to classify the target examples in the shared classes against the domain shift. A more prominent challenge is to mark the target examples in the target-individual label set (open classes) as "unknown". These two entangled challenges make UniDA a highly under-explored problem. Previous work on UniDA focuses on classifying data in the shared classes and uses per-class accuracy as the evaluation metric, which is heavily biased toward the accuracy of the shared classes. However, accurately detecting open classes is the mission-critical task for real universal domain adaptation: once the open classes are detected, UniDA reduces to a well-established closed-set domain adaptation problem. Toward accurate open-class detection, we propose Calibrated Multiple Uncertainties (CMU) with a novel transferability measure estimated by a mixture of complementary uncertainty quantities: entropy, confidence, and consistency, defined on conditional probabilities calibrated by a multi-classifier ensemble model. The new transferability measure accurately quantifies the inclination of a target example toward the open classes. We also propose a novel evaluation metric called H-score, which emphasizes both the accuracy on the shared classes and the accuracy on the "unknown" class. Empirical results under the UniDA setting show that CMU outperforms state-of-the-art domain adaptation methods on all evaluation metrics, especially by a large margin on H-score.
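The abstract names two computable ingredients: the H-score (commonly defined as the harmonic mean of the accuracy on shared classes and the accuracy on the "unknown" class) and the three uncertainty quantities computed from an ensemble's predicted distributions. The sketch below illustrates both under that reading; the exact calibration and the weighting used to combine the uncertainties into the CMU transferability measure are defined in the paper itself, so the `uncertainties` helper here is an assumption-labeled illustration, not the authors' implementation.

```python
import numpy as np

def h_score(acc_shared: float, acc_unknown: float) -> float:
    """Harmonic mean of shared-class accuracy and "unknown"-class accuracy.

    Unlike per-class accuracy averaged over shared classes, this is high
    only when BOTH accuracies are high.
    """
    return 2 * acc_shared * acc_unknown / (acc_shared + acc_unknown)

def uncertainties(probs: np.ndarray):
    """Illustrative uncertainty quantities for one target example.

    probs: array of shape (n_classifiers, n_classes) holding the predicted
    class distributions from a multi-classifier ensemble (hypothetical input
    format; the paper defines these on calibrated probabilities).
    """
    mean_p = probs.mean(axis=0)                        # ensemble-averaged distribution
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum() # high => uncertain => open class
    confidence = mean_p.max()                          # high => likely a shared class
    consistency = probs.std(axis=0).max()              # ensemble disagreement per class
    return entropy, confidence, consistency
```

For example, `h_score(0.9, 0.3)` is only 0.45, so a model that sacrifices open-class detection for shared-class accuracy scores poorly, which is the bias the abstract argues per-class accuracy hides.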