Keywords
Subspace topology, Domain adaptation, Adversarial system, Domain (mathematical analysis), Computer science, Artificial intelligence, Tangent, Adaptation (eye), Pattern recognition (psychology), Mathematics, Biology, Geometry, Mathematical analysis, Neuroscience, Classifier (UML)
Authors
Christoph Raab,Manuel Röder,Frank-Michael Schleif
Identifier
DOI:10.1016/j.neucom.2022.07.074
Abstract
Deep learning is reaching the state of the art in many applications. However, the generalization capabilities of the learned networks are limited to the training or source domain. The predictive power decreases when these models are evaluated in a target domain different from the source domain. Joint adversarial domain adaptation networks are currently the preferred models for source-to-target domain adaptation due to their good empirical performance. These models simultaneously learn a classifier, learn an invariant representation through an adversarial min–max game, and adapt local structures between domains. For the latter, it is common practice to incorporate pseudo labels, which can, however, be unreliable due to false predictions on challenging tasks. This work proposes the Domain Adversarial Tangent Subspace Alignment (DATSA) network, which models data as affine subspaces and adversarially aligns local approximations of manifolds across domains. DATSA addresses the drawbacks of joint adversarial domain adaptation networks by not requiring pseudo labels for local alignment, because it relies on self-supervised learning for subspace alignment. Additionally, DATSA's adaptations are explainable to some extent, and the results show that it is competitive with other models in terms of accuracy.
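To make the idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of the three ingredients it describes: a source classifier, a DANN-style adversarial min–max game via gradient reversal for an invariant representation, and a self-supervised local alignment term that estimates a tangent (affine) subspace around each sample from its feature-space neighbors and penalizes the chordal distance between matched source and target subspaces, without any pseudo labels. All network sizes, loss weights, and helper names here are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: adversarial feature alignment plus self-supervised
# tangent-subspace alignment. Shapes, weights, and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass
    (standard gradient-reversal trick for the adversarial min-max game)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def local_tangent_bases(feats, k=10, dim=3):
    """Approximate the tangent subspace at each sample: SVD of its k centered
    nearest neighbors in feature space; returns orthonormal bases (N, D, dim)."""
    dists = torch.cdist(feats, feats)
    idx = dists.topk(k + 1, largest=False).indices[:, 1:]   # skip the point itself
    neigh = feats[idx]                                       # (N, k, D)
    neigh = neigh - neigh.mean(dim=1, keepdim=True)          # center each neighborhood
    _, _, Vh = torch.linalg.svd(neigh, full_matrices=False)
    return Vh[:, :dim, :].transpose(1, 2)

def subspace_alignment_loss(f_src, f_tgt, k=10, dim=3):
    """Self-supervised alignment: pair each target point with its nearest source
    point and penalize the chordal distance between their local subspaces."""
    B_src = local_tangent_bases(f_src, k, dim)
    B_tgt = local_tangent_bases(f_tgt, k, dim)
    nn_idx = torch.cdist(f_tgt, f_src).argmin(dim=1)
    M = B_tgt.transpose(1, 2) @ B_src[nn_idx]                # (Nt, dim, dim)
    # chordal distance^2 = dim - ||B_t^T B_s||_F^2, zero when subspaces coincide
    return (dim - (M ** 2).sum(dim=(1, 2))).mean()

feat_net = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
classifier = nn.Linear(64, 10)      # label predictor, trained on source labels only
domain_disc = nn.Linear(64, 1)      # domain discriminator for the adversarial branch
opt = torch.optim.Adam(
    [*feat_net.parameters(), *classifier.parameters(), *domain_disc.parameters()],
    lr=1e-3,
)

def train_step(x_src, y_src, x_tgt, lam=0.1, gamma=0.1):
    f_src, f_tgt = feat_net(x_src), feat_net(x_tgt)
    cls_loss = F.cross_entropy(classifier(f_src), y_src)
    # adversarial domain confusion via gradient reversal
    d_logits = domain_disc(GradReverse.apply(torch.cat([f_src, f_tgt]), lam)).squeeze(1)
    d_labels = torch.cat([torch.zeros(len(f_src)), torch.ones(len(f_tgt))])
    adv_loss = F.binary_cross_entropy_with_logits(d_logits, d_labels)
    align_loss = subspace_alignment_loss(f_src, f_tgt)       # no pseudo labels used
    loss = cls_loss + adv_loss + gamma * align_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

The key point mirrored from the abstract is that the alignment term operates purely on geometric structure (neighborhood subspaces) rather than on predicted class labels, which is what removes the dependence on pseudo labels.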