Computer science
Discriminant
Artificial intelligence
Joint probability distribution
Divergence (linguistics)
Domain (mathematical analysis)
Domain adaptation
Machine learning
Pattern recognition (psychology)
Metric (unit)
Feature learning
Feature (linguistics)
Mathematics
Classifier (UML)
Statistics
Mathematical analysis
Philosophy
Linguistics
Economics
Operations management
Authors
Tian Qiu, Jiazhong Zhou, Yi Chu
Identifier
DOI:10.1016/j.knosys.2022.108903
Abstract
An important challenge of unsupervised domain adaptation (UDA) is how to sufficiently utilize the structure and information of the data distribution, so as to exploit source-domain knowledge for a more accurate classification of the unlabeled target domain. Much research has been devoted to UDA; however, existing works have mostly considered only distribution alignment or learning domain-invariant features via adversarial techniques, ignoring feature processing and intra-domain category information. To this end, we design a new cross-domain discrepancy metric, namely joint distribution for maximum mean discrepancy (JD-MMD), and propose a deep unsupervised domain adaptation learning method, namely joint bi-adversarial learning for unsupervised domain adaptation (JBL-UDA). Specifically, JD-MMD measures cross-domain divergence in terms of both discrepancy and relevance by preserving the cross-domain joint distribution discrepancy as well as class discriminability. With this divergence measure, JBL-UDA models two learning modalities: one implicitly learns from domains and classes via bi-adversarial learning, while the other explicitly aligns domains and classes via the JD-MMD metric. Besides, JBL-UDA explores structural prior knowledge from data classes and domains to generate class-discriminative and domain-invariant representations. Finally, extensive evaluations exhibit state-of-the-art accuracy of the proposed methodology.
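The abstract builds on the standard maximum mean discrepancy (MMD), which JD-MMD extends to joint distributions with class information. As background, here is a minimal sketch of the biased squared-MMD estimator with an RBF kernel between two sample sets; the function names and the `gamma` bandwidth parameter are illustrative choices, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Y ** 2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y.

    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)],
    estimated by averaging the three kernel blocks.
    """
    return (
        rbf_kernel(X, X, gamma).mean()
        + rbf_kernel(Y, Y, gamma).mean()
        - 2.0 * rbf_kernel(X, Y, gamma).mean()
    )

# Samples from the same distribution give a small MMD; a shifted
# distribution gives a larger one.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 2))   # "source domain" sample
same = rng.normal(0.0, 1.0, size=(200, 2))  # same distribution
shift = rng.normal(3.0, 1.0, size=(200, 2)) # shifted "target domain"
print(mmd2(src, same), mmd2(src, shift))
```

In deep UDA methods of this kind, such a discrepancy term is typically computed on learned features of source and target batches and minimized alongside the classification loss; JD-MMD additionally preserves class discriminability, which this plain-MMD sketch does not capture.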