Keywords
Computer science; Classifier (UML); Pattern recognition (psychology); Domain adaptation; Margin (machine learning); Outlier; Transfer of learning; Benchmark (surveying); Artificial intelligence; Invariant (physics); Domain (mathematical analysis); Data mining; Machine learning; Mathematics; Mathematical analysis; Mathematical physics; Geography; Geodesy
Authors
Cheng Feng, Chaoliang Zhong, Jie Wang, Jun Sun, Yasuto Yokota
Source
Venue: Conference on Information and Knowledge Management
Date: 2021-10-26
Pages: 464-473
Cited by: 2
Identifier
DOI: 10.1145/3459637.3482238
Abstract
Unsupervised domain adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain. Most existing UDA methods try to learn domain-invariant features so that a classifier trained on the source labels can be adapted automatically to the target domain. However, recent works have shown the limitations of these methods when label distributions differ between the source and target domains. In particular, in partial domain adaptation (PDA), where the source domain holds many labels of its own (private labels) that do not appear in the target domain, domain-invariant features can cause catastrophic performance degradation. In this paper, building on the originally favorable underlying structures of the two domains, we learn two kinds of target features, i.e., source-approximate features and target-approximate features, instead of domain-invariant features. The source-approximate features exploit the consistency of the two domains to estimate the distribution of the source private labels. The target-approximate features enhance feature discrimination in the target domain while detecting hard (outlier) target samples. We propose a novel Coupled Approximation Neural Network (CANN) that co-trains the source-approximate and target-approximate features with two parallel sub-networks that do not share parameters. We apply CANN to three widely used transfer learning benchmark datasets, Office-Home, Office-31, and VisDA-2017, under both UDA and PDA settings. The results show that CANN outperforms all baselines by a large margin in PDA and also performs best in UDA.
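The central architectural idea in the abstract is two parallel sub-networks trained jointly without parameter sharing, one producing source-approximate features and the other target-approximate features. The following is a minimal illustrative sketch of that structural idea only; the class and branch names are hypothetical, the linear/ReLU extractors are toy stand-ins, and none of the paper's actual losses or training procedure is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyExtractor:
    """A toy feature extractor: one linear layer plus ReLU, with its own parameters."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.normal(scale=0.1, size=(in_dim, out_dim))
        self.b = np.zeros(out_dim)

    def __call__(self, x):
        return np.maximum(x @ self.W + self.b, 0.0)

class CoupledBranchesSketch:
    """Two parallel sub-networks with no shared parameters (hypothetical names):
    one branch for source-approximate features, one for target-approximate features."""
    def __init__(self, in_dim, feat_dim):
        self.source_branch = ToyExtractor(in_dim, feat_dim)  # independent parameters
        self.target_branch = ToyExtractor(in_dim, feat_dim)  # independent parameters

    def __call__(self, x):
        # The same input batch is encoded twice, once per branch.
        return self.source_branch(x), self.target_branch(x)

net = CoupledBranchesSketch(in_dim=8, feat_dim=4)
x = rng.normal(size=(5, 8))        # a batch of 5 samples
f_src, f_tgt = net(x)
print(f_src.shape, f_tgt.shape)    # two feature sets of equal shape
```

Because the branches are initialized and stored independently, any gradient update applied to one leaves the other untouched, which is the "no shared parameters" property the abstract emphasizes.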