Discriminant
Computer science
Artificial intelligence
Feature (linguistics)
Pattern recognition (psychology)
Invariant (physics)
Domain adaptation
Domain (mathematical analysis)
Adversarial system
Machine learning
Feature learning
Mathematics
Classifier (UML)
Mathematical analysis
Philosophy
Linguistics
Mathematical physics
Authors
Yi-Ju Yang, Tianxiao Zhang, Guanyu Li, Taejoon Kim, Guanghui Wang
Identifier
DOI: 10.1016/j.neucom.2021.12.060
Abstract
In this paper, we propose a dual-module network architecture that employs a domain-discriminative feature module to encourage the domain-invariant feature module to learn more domain-invariant features. The proposed architecture can be applied to any model that utilizes domain-invariant features for unsupervised domain adaptation, improving its ability to extract such features. We conduct experiments with the Domain-Adversarial Training of Neural Networks (DANN) model as a representative algorithm. During training, we feed the same input to the two modules and then extract their feature distributions and prediction results, respectively. We propose a discrepancy loss that measures the discrepancy between the two modules' prediction results and between their feature distributions. Through adversarial training that maximizes the discrepancy of their feature distributions while minimizing the discrepancy of their prediction results, the two modules are encouraged to learn more domain-discriminative and more domain-invariant features, respectively. Extensive comparative evaluations show that the proposed approach outperforms the state of the art on most unsupervised domain adaptation tasks.
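The abstract describes the dual-module design only at a high level. Below is a minimal PyTorch sketch of that idea: two feature modules receive the same input, their feature distributions are pushed apart while their predictions are kept consistent. The MLP extractors, the L1 prediction-discrepancy measure, the mean-feature L2 distance, and all names and loss weights are illustrative assumptions, and the DANN domain-adversarial branch of the base model is omitted for brevity; this is not the authors' released implementation.

```python
# Sketch of the dual-module adversarial training described in the abstract.
# All module names, layer sizes, and discrepancy measures are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureModule(nn.Module):
    """Small MLP feature extractor; both modules share this architecture."""
    def __init__(self, in_dim=784, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class Classifier(nn.Module):
    def __init__(self, feat_dim=256, n_classes=10):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, f):
        return self.fc(f)


def prediction_discrepancy(p1, p2):
    """L1 discrepancy between the two modules' softmax predictions."""
    return (F.softmax(p1, dim=1) - F.softmax(p2, dim=1)).abs().mean()


def feature_discrepancy(f1, f2):
    """Distance between the two modules' feature distributions
    (mean-feature L2 distance here; the paper's measure may differ)."""
    return (f1.mean(dim=0) - f2.mean(dim=0)).pow(2).sum()


# One module is encouraged to stay domain discriminative, the other to
# become domain invariant; both see the same inputs.
invariant_module = FeatureModule()
discriminative_module = FeatureModule()
classifier_inv = Classifier()
classifier_dis = Classifier()

opt = torch.optim.Adam(
    list(invariant_module.parameters())
    + list(discriminative_module.parameters())
    + list(classifier_inv.parameters())
    + list(classifier_dis.parameters()),
    lr=1e-3,
)


def training_step(x_src, y_src, x_tgt, lambda_feat=0.1, lambda_pred=0.1):
    # The same input goes through both modules.
    f_inv_s, f_dis_s = invariant_module(x_src), discriminative_module(x_src)
    f_inv_t, f_dis_t = invariant_module(x_tgt), discriminative_module(x_tgt)

    p_inv_s, p_dis_s = classifier_inv(f_inv_s), classifier_dis(f_dis_s)
    p_inv_t, p_dis_t = classifier_inv(f_inv_t), classifier_dis(f_dis_t)

    # Supervised loss on labelled source data for both branches.
    cls_loss = F.cross_entropy(p_inv_s, y_src) + F.cross_entropy(p_dis_s, y_src)

    # Adversarial objective from the abstract: maximize the discrepancy of the
    # feature distributions, minimize the discrepancy of the predictions.
    feat_disc = feature_discrepancy(f_inv_t, f_dis_t)
    pred_disc = prediction_discrepancy(p_inv_t, p_dis_t)
    loss = cls_loss - lambda_feat * feat_disc + lambda_pred * pred_disc

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In a full setup, this step would be combined with the base model's own unsupervised domain adaptation loss (e.g., DANN's gradient-reversal domain classifier on the invariant branch), with the subtracted feature-discrepancy term supplying the adversarial pressure between the two modules.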