Keywords
adversarial learning; domain adaptation; entropy; divergence; metric; feature representation; classifier; artificial intelligence; computer science; theoretical computer science; mathematics; mathematical analysis
Authors
Xiaohan Huang, Xuesong Wang, Qiang Yu, Yuhu Cheng
Source
Journal: IEEE Transactions on Cognitive and Developmental Systems
[Institute of Electrical and Electronics Engineers]
Date: 2022-12-01
Volume/Issue: 14 (4): 1440-1448
Citations: 1
Identifier
DOI: 10.1109/tcds.2021.3104231
Abstract
Domain adaptation (DA) refers to generalizing a learning technique from a source domain to a target domain under different distributions. The essential problem in DA is therefore how to reduce the distribution discrepancy between the source and target domains. Typical methods embed adversarial learning into deep networks to learn transferable feature representations. However, existing adversarial DA methods may not sufficiently minimize the distribution discrepancy. In this article, a DA method, minimum adversarial distribution discrepancy (MADD), is proposed by combining feature distribution matching with adversarial learning. Specifically, we design a novel divergence metric loss, named maximum mean discrepancy based on conditional entropy (MMD-CE), and embed it in the adversarial DA network. The proposed MMD-CE loss addresses two problems: 1) the misalignment between class distributions across domains and 2) the equilibrium challenge in adversarial DA. Comparative experiments against state-of-the-art methods on the Office-31, ImageCLEF-DA, and Office-Home data sets show that our method achieves advantageous performance.
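The abstract does not give the exact form of the MMD-CE loss, but its two ingredients are standard: a kernel maximum mean discrepancy between source and target features, and the conditional entropy of the classifier's predictions on target samples. The sketch below is an illustration of those two generic building blocks only (Gaussian-kernel MMD and mean Shannon entropy), not the authors' actual MMD-CE formulation; all function names are our own.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(xs, xt, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy between
    # source features xs and target features xt; zero when they coincide.
    return (gaussian_kernel(xs, xs, sigma).mean()
            - 2.0 * gaussian_kernel(xs, xt, sigma).mean()
            + gaussian_kernel(xt, xt, sigma).mean())

def conditional_entropy(probs, eps=1e-12):
    # Mean Shannon entropy of per-sample class posteriors (rows sum to 1);
    # low values mean confident predictions on unlabeled target data.
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())
```

In an adversarial DA pipeline, terms like these would typically be added to the domain-discriminator objective so that feature alignment and prediction confidence are optimized jointly; how MADD weights and combines them is specified in the paper itself.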