Authors
Hui Wang, Liangli Zheng, Hanbin Zhao, Shijian Li, Xi Li
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-02-29
Volume/Issue: 35 (7): 9930-9942
Cited by: 2
Identifier
DOI:10.1109/tnnls.2023.3238063
Abstract
Unsupervised domain adaptation (UDA) aims to make predictions on an unlabeled target domain by transferring knowledge learned from a label-rich source domain. In practice, existing UDA approaches mainly focus on minimizing the discrepancy between domains through mini-batch training, where only a few instances are accessible at each iteration. Due to the randomness of sampling, such a batch-level alignment pattern is unstable and may lead to misalignment. To alleviate this risk, we propose class-aware memory alignment (CMA), which models the distributions of the two domains with two auxiliary class-aware memories and performs domain adaptation on these predefined memories. CMA is designed with two distinct characteristics: class-aware memories that create two symmetrical class-aware distributions for the different domains, and two reliability-based filtering strategies that enhance the reliability of the constructed memory. We further design a unified memory-based loss to jointly improve the transferability and discriminability of the features stored in the memories. State-of-the-art (SOTA) comparisons and careful ablation studies show the effectiveness of the proposed CMA.
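The abstract describes per-class memories that summarize each domain's feature distribution, a confidence-based reliability filter, and an alignment objective over the memories. A minimal sketch of that idea is shown below; the class name, the EMA momentum update, the single confidence threshold, and the squared-distance alignment loss are all illustrative assumptions, since the abstract does not specify the paper's actual implementation.

```python
import numpy as np

class ClassAwareMemory:
    """Sketch of a per-class feature memory: one EMA-updated slot per
    class, filled only by samples that pass a reliability filter.
    (Hypothetical API; not the paper's actual code.)"""

    def __init__(self, num_classes, feat_dim, momentum=0.9, conf_threshold=0.8):
        self.memory = np.zeros((num_classes, feat_dim))      # one slot per class
        self.initialized = np.zeros(num_classes, dtype=bool)  # has the slot been filled?
        self.momentum = momentum
        self.conf_threshold = conf_threshold

    def update(self, features, probs):
        """Update class slots with features whose predicted confidence
        exceeds a threshold -- a simple reliability-based filter."""
        labels = probs.argmax(axis=1)   # (pseudo-)labels
        conf = probs.max(axis=1)        # prediction confidence
        for f, y, c in zip(features, labels, conf):
            if c < self.conf_threshold:  # discard unreliable samples
                continue
            if not self.initialized[y]:
                self.memory[y] = f
                self.initialized[y] = True
            else:  # exponential moving average keeps the slot stable across batches
                self.memory[y] = self.momentum * self.memory[y] + (1 - self.momentum) * f

def alignment_loss(src, tgt):
    """Mean squared distance between corresponding class slots of a
    source memory and a target memory, over classes present in both."""
    valid = src.initialized & tgt.initialized
    diff = src.memory[valid] - tgt.memory[valid]
    return float((diff ** 2).mean())
```

Because the memories persist across iterations, the alignment loss compares whole class-conditional summaries rather than whichever few instances happen to be sampled into the current mini-batch, which is the instability the abstract attributes to batch-level alignment.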