Domain Adaptation
Authors
Yifan Pan, Guibo Luo, Bairong Li, Yuesheng Zhu
Identifier
DOI:10.1109/icassp48485.2024.10446867
Abstract
Unsupervised Domain Adaptation (UDA) deals with transferring knowledge from labeled source domains to unlabeled target domains. It addresses the challenge of differing distributions across domains, commonly known as domain shift. Numerous methods attempt to align distributions across domains while learning the core tasks (e.g., classification) on the source domain separately. However, limited research has explored the mutual influence between classification and domain alignment. In this paper, we discuss the conflicting optimization between the domain alignment and classification tasks, emphasizing the risk of negative transfer due to conflicting optimization directions. For better optimization consistency, these tasks should concentrate on the common information in the features. To address this issue, we propose an innovative framework, Dual-attention between classification and Domain Alignment (DuDA). DuDA employs gradient-based saliency maps to generate interpretable attentions, concurrently enhancing both classification and domain alignment through a dual-attention mechanism. Experimental results verify the effectiveness of DuDA in mitigating negative transfer, as well as its strong adaptability and promising performance.
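The abstract does not give implementation details, but the core idea of a gradient-based saliency attention can be illustrated with a minimal sketch. The example below is a hypothetical simplification, not the paper's actual architecture: it uses a linear softmax classifier so the gradient of the cross-entropy loss with respect to the features has a closed form, takes the gradient magnitude as a saliency map, and normalizes it into an attention vector that could reweight the features shared by both the classification and domain-alignment heads.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def saliency_attention(f, W, y):
    """Gradient-based saliency attention for a linear softmax classifier.

    This is an illustrative stand-in for the deep networks in the paper:
    f : (d,)   feature vector
    W : (C, d) classifier weights
    y : int    ground-truth class index
    Returns a normalized attention vector over the d feature dimensions.
    """
    p = softmax(W @ f)                # class probabilities
    onehot = np.eye(W.shape[0])[y]
    grad_f = W.T @ (p - onehot)       # closed-form d(CE loss)/d(f)
    sal = np.abs(grad_f)              # saliency: gradient magnitude
    return sal / (sal.sum() + 1e-12)  # normalize to an attention map

rng = np.random.default_rng(0)
f = rng.normal(size=8)
W = rng.normal(size=(3, 8))
att = saliency_attention(f, W, y=1)
# Hypothetical use: both heads consume the same reweighted features,
# so their optimization focuses on the same salient dimensions.
f_shared = f * (1.0 + att)
```

Here the dual-attention intuition is that, because one attention map derived from the classification gradient modulates the features seen by both tasks, the two objectives are steered toward common information rather than pulling the features in conflicting directions.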