Keywords
Domain (mathematical analysis), Computer science, Smoothness, Bounded function, Domain adaptation, Artificial intelligence, Algorithm, Pattern recognition, Topology, Mathematics, Combinatorics, Classifier, Operating system, Mathematical analysis
Authors
Weikai Li, Songcan Chen
Identifier
DOI: 10.1109/TPAMI.2022.3228937
Abstract
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a well-labeled source domain to a related, unlabeled target domain with an identical label space. The main workhorse in UDA is domain alignment, which has proven successful. In practice, however, it is difficult to find an appropriate source domain with an identical label space. A more practical scenario is partial domain adaptation (PDA), in which the source label space subsumes the target one. Unfortunately, because the label spaces are not identical, an ideal alignment is extremely hard to obtain; alignment instead tends to cause mode collapse and negative transfer. This motivates us to find a relatively simpler alternative for solving PDA. To this end, we first present a theoretical analysis showing that the target risk is bounded by both model smoothness and the between-domain discrepancy. We then instantiate model smoothness as intra-domain structure preservation (IDSP), while giving up the possibly riskier domain alignment. To the best of our knowledge, this is the first attempt to solve PDA without alignment. Finally, our empirical results on benchmarks demonstrate that IDSP is not only superior to the PDA state of the art (e.g., ∼ +10% on Cl → Rw and ∼ +8% on Ar → Rw), but also complementary to domain alignment in standard UDA.
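The abstract does not specify how intra-domain structure preservation is formulated, but one common way to encode the idea of "nearby target samples should receive similar predictions" is a graph-Laplacian smoothness penalty over a k-nearest-neighbour graph of the target features. The sketch below is an illustrative assumption, not the authors' exact IDSP objective: `knn_graph` and `idsp_penalty` are hypothetical helper names, and the penalty sums squared prediction differences over graph edges.

```python
import numpy as np

def knn_graph(X, k=3):
    """Symmetric 0/1 k-nearest-neighbour adjacency over the rows of X.

    Assumed construction: Euclidean distances, self-edges excluded,
    then symmetrised with an element-wise max.
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # no self-neighbours
    idx = np.argsort(d, axis=1)[:, :k]   # k nearest per row
    W = np.zeros_like(d)
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = 1.0
    return np.maximum(W, W.T)            # make the graph undirected

def idsp_penalty(probs, W):
    """Graph-smoothness term: sum_ij W_ij * ||p_i - p_j||^2.

    probs: (n, C) class-probability predictions on target samples.
    Small values mean neighbouring samples get similar predictions.
    """
    diff = probs[:, None, :] - probs[None, :, :]
    return float(np.sum(W * np.sum(diff ** 2, axis=-1)))
```

In a training loop this penalty would be added to the source classification loss, so that target structure constrains the classifier without any explicit source/target alignment step, which mirrors the paper's motivation of avoiding a risky alignment in PDA.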