Computer science
Source code
Domain (mathematical analysis)
Domain adaptation
Transfer of learning
Multi-source
Convolution (computer science)
Coding (set theory)
Adaptation (eye)
Optics (focusing)
Effective domain
Transmission (computing)
Artificial intelligence
Set (abstract data type)
Mathematics
Programming language
Mathematical analysis
Physics
Optics
Convex optimization
Parallel computing
Regular polygon
Convex combination
Statistics
Classifier (UML)
Artificial neural network
Geometry
Authors
Yunsheng Li, Yuan Liu, Yinpeng Chen, Pei Wang, Nuno Vasconcelos
Source
Venue: Computer Vision and Pattern Recognition
Date: 2021-06-01
Citations: 38
Identifier
DOI: 10.1109/cvpr46437.2021.01085
Abstract
Recent work on multi-source domain adaptation focuses on learning a domain-agnostic model whose parameters are static. However, such a static model struggles to handle conflicts across multiple domains and suffers performance degradation in both the source domains and the target domain. In this paper, we present dynamic transfer to address domain conflicts, where the model parameters are adapted to individual samples. The key insight is that adapting the model across domains can be achieved by adapting it across samples. This breaks down the barriers between source domains and turns the multiple source domains into a single source domain. It also simplifies the alignment between source and target domains, since the target domain only needs to be aligned with some part of the union of the source domains. Furthermore, we find that dynamic transfer can be modeled simply by aggregating residual matrices with a static convolution matrix. Experimental results show that, without using domain labels, our dynamic transfer outperforms the state-of-the-art method by more than 3% on the large multi-source domain adaptation dataset DomainNet. Source code is available at https://github.com/liyunsheng13/DRT.
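The abstract's central modeling claim, a static convolution kernel combined with sample-conditioned residual matrices, can be illustrated with a minimal PyTorch sketch. The module below is an assumption-laden illustration rather than the authors' DRT implementation: the class name DynamicResidualConv, the average-pool gating branch, and the parameter num_residual are hypothetical choices made here for clarity; refer to the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicResidualConv(nn.Module):
    """Sketch of dynamic transfer: a static convolution kernel plus a
    sample-conditioned aggregation of K residual kernels (hypothetical layout)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, num_residual=4):
        super().__init__()
        padding = kernel_size // 2
        # Static (domain-agnostic) convolution shared by all samples.
        self.static_conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        # K residual kernels; their weighted sum perturbs the static kernel.
        self.residual_weight = nn.Parameter(
            0.01 * torch.randn(num_residual, out_ch, in_ch, kernel_size, kernel_size))
        # Lightweight gate mapping each sample to K aggregation coefficients.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_ch, num_residual),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        coeff = torch.softmax(self.gate(x), dim=1)  # (B, K) per-sample coefficients
        # Per-sample kernel: W(x) = W_static + sum_k pi_k(x) * W_residual_k
        residual = torch.einsum('bk,koihw->boihw', coeff, self.residual_weight)
        kernel = self.static_conv.weight.unsqueeze(0) + residual  # (B, O, I, kh, kw)
        # Grouped-convolution trick to apply a different kernel to each sample.
        out = F.conv2d(
            x.reshape(1, b * c, h, w),
            kernel.reshape(-1, c, *kernel.shape[-2:]),
            padding=self.static_conv.padding,
            groups=b,
        )
        out = out.reshape(b, -1, out.shape[-2], out.shape[-1])
        return out + self.static_conv.bias.view(1, -1, 1, 1)
```

Under these assumptions, a layer like this could stand in for a standard nn.Conv2d, e.g. DynamicResidualConv(64, 64)(torch.randn(8, 64, 32, 32)); when all aggregation coefficients are zero it reduces to the plain static convolution.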