Keywords
Artificial intelligence
Computer science
Deep learning
Convolutional neural network
Domain adaptation
Kernel (algebra)
Machine learning
Embedding
Domain (mathematical analysis)
Feature learning
Artificial neural network
Pattern recognition (psychology)
Transfer of learning
Theoretical computer science
Feature (linguistics)
Mathematics
Classifier (UML)
Combinatorics
Mathematical analysis
Philosophy
Linguistics
Authors
Mingsheng Long,Yue Cao,Zhangjie Cao,Jianmin Wang,Michael I. Jordan
Identifier
DOI:10.1109/tpami.2018.2868685
Abstract
Domain adaptation studies learning algorithms that generalize across source domains and target domains that exhibit different distributions. Recent studies reveal that deep neural networks can learn transferable features that generalize well to similar novel tasks. However, as deep features eventually transition from general to specific along the network, feature transferability drops significantly in higher task-specific layers with increasing domain discrepancy. To formally reduce the effects of this discrepancy and enhance feature transferability in task-specific layers, we develop a novel framework for deep adaptation networks that extends deep convolutional neural networks to domain adaptation problems. The framework embeds the deep features of all task-specific layers into reproducing kernel Hilbert spaces (RKHSs) and optimally matches different domain distributions. The deep features are made more transferable by exploiting low-density separation of target-unlabeled data in very deep architectures, while the domain discrepancy is further reduced via the use of multiple kernel learning that enhances the statistical power of kernel embedding matching. The overall framework is cast in a minimax game setting. Extensive empirical evidence shows that the proposed networks yield state-of-the-art results on standard visual domain-adaptation benchmarks.
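The core statistic behind matching domain distributions in an RKHS is the multi-kernel maximum mean discrepancy (MK-MMD) between source and target features. The following is a minimal NumPy sketch of a biased MK-MMD estimator with a family of Gaussian kernels; it is an illustration of the general technique, not the authors' implementation, and the bandwidth values are arbitrary choices for the example.

```python
import numpy as np

def multi_gaussian_kernel(x, y, gammas):
    """Sum of Gaussian kernels k(x, y) = exp(-gamma * ||x - y||^2) over a bandwidth family."""
    # Pairwise squared Euclidean distances between rows of x and rows of y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    # Multi-kernel: average the kernels across the bandwidth family.
    return sum(np.exp(-g * sq_dists) for g in gammas) / len(gammas)

def mk_mmd2(source, target, gammas=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Biased estimator of the squared multi-kernel MMD between two samples."""
    k_ss = multi_gaussian_kernel(source, source, gammas).mean()
    k_tt = multi_gaussian_kernel(target, target, gammas).mean()
    k_st = multi_gaussian_kernel(source, target, gammas).mean()
    # MMD^2 = E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]
    return k_ss + k_tt - 2.0 * k_st
```

In a deep adaptation network this quantity would be computed on the activations of each task-specific layer and added to the classification loss, so that minimizing the total loss pulls the source and target feature distributions together.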