Computer science
Domain (mathematical analysis)
Artificial intelligence
Component (thermodynamics)
Transfer of learning
Bipartite graph
Adversarial system
Domain adaptation
Domain engineering
Machine learning
Graph
Theoretical computer science
Pattern recognition (psychology)
Mathematics
Classifier (UML)
Mathematical analysis
Physics
Component-based software engineering
Software
Software system
Thermodynamics
Programming language
Authors
Chang’an Yi, Haotian Chen, Yonghui Xu, Huanhuan Chen, Yong Liu, Haishu Tan, Yuguang Yan, Han Yu
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-05-24
Volume/Issue: 34 (10): 6824-6838
Cited by: 3
Identifiers
DOI: 10.1109/tnnls.2023.3270359
Abstract
Domain adaptation (DA) aims to transfer knowledge from a source domain to a different but related target domain. The mainstream approach embeds adversarial learning into deep neural networks (DNNs), either to learn domain-invariant features that reduce the domain discrepancy or to generate data that fill in the domain gap. However, these adversarial DA (ADA) approaches mainly consider domain-level data distributions while ignoring the differences among the components contained in different domains. As a result, components that are not related to the target domain are not filtered out, which can cause negative transfer. In addition, it is difficult to make full use of the components shared between the source and target domains to enhance DA. To address these limitations, we propose a general two-stage framework named multicomponent ADA (MCADA), which trains the target model by first learning a domain-level model and then fine-tuning that model at the component level. In particular, MCADA constructs a bipartite graph to find the most relevant source component for each component in the target domain. Since the nonrelevant components are filtered out for each target component, fine-tuning the domain-level model enhances positive transfer. Extensive experiments on several real-world datasets demonstrate that MCADA has significant advantages over state-of-the-art methods.
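The abstract only describes MCADA at a high level, so the following is a minimal, hypothetical Python sketch of the component-matching step it mentions: cluster each domain's features into components and match every target component to its most relevant source component via a bipartite assignment, leaving the least relevant source components unmatched. The use of k-means clusters as "components", cosine distance as the edge cost, and the `match_components` helper are all illustrative assumptions, not the paper's actual construction.

```python
# A minimal sketch of the bipartite component-matching idea, assuming
# k-means clusters as "components" and cosine distance as the edge cost
# (both assumptions; the paper's construction may differ).
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_distances

def match_components(src_feats, tgt_feats, n_src=5, n_tgt=3, seed=0):
    """Cluster each domain into components and match every target component
    to its most relevant source component; surplus source components are
    left unmatched, i.e. filtered out before component-level fine-tuning."""
    src_centers = KMeans(n_clusters=n_src, random_state=seed, n_init=10).fit(src_feats).cluster_centers_
    tgt_centers = KMeans(n_clusters=n_tgt, random_state=seed, n_init=10).fit(tgt_feats).cluster_centers_
    # Bipartite edge costs: rows index target components, columns source ones.
    cost = cosine_distances(tgt_centers, src_centers)
    tgt_idx, src_idx = linear_sum_assignment(cost)  # min-cost one-to-one matching
    return list(zip(tgt_idx.tolist(), src_idx.tolist()))

# Toy usage: random 64-d features standing in for the encoder outputs of
# the domain-level model trained in stage one.
rng = np.random.default_rng(0)
pairs = match_components(rng.normal(size=(200, 64)), rng.normal(size=(150, 64)))
print(pairs)  # three (target, source) pairs; two source components are filtered out
```

In the two-stage framework the abstract describes, the features would come from the domain-level model trained in stage one, and the matched (target, source) component pairs would then drive the component-level fine-tuning in stage two.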