Invariant (physics)
Artificial intelligence
Pattern recognition (psychology)
Domain adaptation
Robustness (evolution)
Mathematics
Normalization (sociology)
Authors
Sentao Chen, Mehrtash Harandi, Xiaona Jin, Xiaowei Yang
Source
Journal: IEEE Transactions on Image Processing
Date: 2020-08-05
Volume/Pages: 29: 8264-8277
Citations: 12
Identifiers
DOI: 10.1109/TIP.2020.3013167
Abstract
Domain adaptation addresses the learning problem where the training data are sampled from a source joint distribution (source domain), while the test data are sampled from a different target joint distribution (target domain). Because of this joint distribution mismatch, a discriminative classifier naively trained on the source domain often generalizes poorly to the target domain. In this article, we therefore present a Joint Distribution Invariant Projections (JDIP) approach to solve this problem. The proposed approach exploits linear projections to directly match the source and target joint distributions under the $L^{2}$-distance. Since the traditional kernel density estimators for distribution estimation tend to be less reliable as the dimensionality increases, we propose a least-squares method to estimate the $L^{2}$-distance without the need to estimate the two joint distributions, leading to a quadratic problem with an analytic solution. Furthermore, we introduce a kernel version of JDIP to account for inherent nonlinearity in the data. We show that the proposed learning problems can be naturally cast as optimization problems defined on the product of Riemannian manifolds. To be comprehensive, we also establish an error bound, theoretically explaining how our method works and contributes to reducing the target domain generalization error. Extensive empirical evidence demonstrates the benefits of our approach over state-of-the-art domain adaptation methods on several visual data sets.
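For orientation, the $L^{2}$-distance the abstract refers to is the squared-integral discrepancy between two densities. Under a linear projection $W$, one natural reading of the objective (our paraphrase; the paper's exact notation may differ) is

$$D(W) \;=\; \iint \big( p_s(z, y) - p_t(z, y) \big)^{2} \, dz \, dy, \qquad z = W^{\top} x,$$

where $p_s$ and $p_t$ denote the source and target joint densities over the projected features $z$ and the labels $y$.

The abstract's "least-squares method to estimate the $L^{2}$-distance without the need to estimate the two joint distributions" matches the standard least-squares density-difference (LSDD) construction: model the difference $p_s - p_t$ with Gaussian basis functions and solve a regularized quadratic problem in closed form. The sketch below implements that generic estimator for two unlabeled sample sets; the function name, bandwidth, and regularizer are illustrative choices, and whether JDIP uses exactly this estimator is an assumption, not a claim about the paper.

```python
# Minimal sketch of a least-squares L2-distance estimator between two sample
# sets, in the spirit of the abstract's description. Hypothetical names and
# hyperparameters; not the authors' reference implementation.

import numpy as np

def l2_distance_lsdd(X_src, X_tgt, sigma=1.0, lam=1e-3):
    """Estimate the L2-distance between the densities of X_src and X_tgt.

    Models the density difference f(x) = p_src(x) - p_tgt(x) with Gaussian
    basis functions centered at the pooled samples and solves the resulting
    regularized least-squares problem in closed form.
    """
    C = np.vstack([X_src, X_tgt])        # basis centers: pooled samples
    n_basis, d = C.shape

    def gauss(A, B):
        # Pairwise Gaussian kernel matrix exp(-||a - b||^2 / (2 sigma^2)).
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))

    # H[l, l'] = integral of k_l(x) k_l'(x) dx has a Gaussian closed form:
    # (pi sigma^2)^(d/2) * exp(-||c_l - c_l'||^2 / (4 sigma^2)).
    sq_cc = ((C[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    H = (np.pi * sigma ** 2) ** (d / 2) * np.exp(-sq_cc / (4 * sigma ** 2))

    # h[l] = E_src[k_l(x)] - E_tgt[k_l(x)], estimated from the samples.
    h = gauss(C, X_src).mean(axis=1) - gauss(C, X_tgt).mean(axis=1)

    # Analytic solution of the regularized quadratic problem.
    theta = np.linalg.solve(H + lam * np.eye(n_basis), h)

    # Plug-in estimate of the L2-distance.
    return 2 * theta @ h - theta @ H @ theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_src = rng.normal(0.0, 1.0, size=(200, 2))   # source samples
    X_tgt = rng.normal(0.5, 1.0, size=(200, 2))   # shifted target samples
    print(l2_distance_lsdd(X_src, X_tgt))         # larger when densities differ
```

The solve for $\theta = (H + \lambda I)^{-1} h$ is the kind of analytic solution the abstract alludes to; plugging $\theta$ back in yields the distance estimate $2\theta^{\top}h - \theta^{\top}H\theta$. JDIP additionally optimizes the projection $W$ itself, which the abstract casts as a problem on a product of Riemannian manifolds.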