Computer science
Joint (building)
Domain adaptation
Artificial intelligence
Joint probability distribution
Robustness (evolution)
Domain (mathematical analysis)
Pattern recognition (psychology)
Data mining
Statistics
Mathematics
Engineering
Biochemistry
Classifier (UML)
Gene
Mathematical analysis
Architectural engineering
Chemistry
Authors
Zhiming Cheng, Shuai Wang, Defu Yang, Jie Qi, Mang Xiao, Chenggang Yan
Identifier
DOI:10.1016/j.patcog.2024.110409
Abstract
Multi-source Unsupervised Domain Adaptation (MUDA) transfers knowledge learned from multiple labeled source domains to an unlabeled target domain by minimizing the domain shift between the source domains and the target domain. Recent studies on MUDA have focused on aligning the distributions of each source-target domain pair in separate feature spaces to reduce their domain shift. However, these approaches suffer from two main shortcomings. First, they usually focus on the global domain shift and neglect the joint distributions of category-corresponding subdomains. Second, out-of-distribution samples far from the sample center are hard to align through global domain alignment alone. We therefore propose a novel Deep Joint Semantic Adaptive Network (DJSAN) for MUDA. Specifically, we propose a new maximum mean discrepancy-based metric, Joint Semantic Maximum Mean Discrepancy (JSMMD), which uniformly optimizes the cross-domain joint distributions of category-corresponding subdomains across multiple task-specific layers. Moreover, to handle out-of-distribution hard samples, we propose a cross-domain data augmentation method, Source-Target Domain Mixing (STDMix), which enhances the robustness of the model: it synthesizes the source domain and target domain into a new domain at a fixed ratio and uses information entropy to provide reliable pseudo-labels for samples in the target domain. Experimental results on three public datasets, i.e., Office-31, Digits-five, and Office-Home, show that the proposed method improves average accuracy by 0.3%, 1.8%, and 2.7%, respectively.
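The abstract combines three building blocks: an MMD-style discrepancy between feature distributions, a fixed-ratio convex mixing of source and target samples, and entropy-based selection of reliable pseudo-labels. The sketch below illustrates these primitives only; it is not the authors' DJSAN implementation. The function names (`gaussian_mmd`, `stdmix`, `reliable_pseudo_labels`), the single-bandwidth Gaussian kernel, and the parameters `lam` and `entropy_threshold` are illustrative assumptions; JSMMD proper operates on class-conditional joint distributions across multiple task-specific layers, which this plain MMD does not capture.

```python
import numpy as np

def gaussian_mmd(x, y, bandwidth=1.0):
    """Squared maximum mean discrepancy between sample sets x and y
    under a Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))."""
    def kernel(a, b):
        sq_dist = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_dist / (2 * bandwidth ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def stdmix(source_x, target_x, lam=0.5):
    """Fixed-ratio convex combination of source and target samples
    (a mixup-style synthesis of an intermediate domain)."""
    n = min(len(source_x), len(target_x))
    return lam * source_x[:n] + (1 - lam) * target_x[:n]

def reliable_pseudo_labels(probs, entropy_threshold=0.5):
    """Keep pseudo-labels only for target samples whose predictive entropy
    falls below the threshold; returns (kept indices, their labels)."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    keep = np.where(entropy < entropy_threshold)[0]
    return keep, probs[keep].argmax(axis=1)
```

In a training loop, a discrepancy of this kind would be added to the classification loss so that minimizing it pulls the source and target feature distributions together, while the entropy filter decides which mixed samples contribute a (pseudo-)supervised term.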