Overfitting
Computer science
Artificial intelligence
Pattern recognition (psychology)
Domain adaptation
Entropy (arrow of time)
Machine learning
Domain (mathematical analysis)
Classifier (UML)
Mathematics
Artificial neural network
Quantum mechanics
Physics
Mathematical analysis
Authors
Chunmei He, Xiuguang Li, Xia Yue, Jing Tang, Jie Yang, Zhengchun Ye
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2023-07-20
Volume/issue: 34 (3): 1532-1545
Citations: 5
Identifier
DOI:10.1109/tcsvt.2023.3296617
Abstract
Partial domain adaptation (PDA) assumes that the target-domain class label set is a subset of the source-domain label set, a problem setting that is close to real-world scenarios. Two main approaches currently address overfitting to the source domain in PDA: entropy minimization and weighted self-training. However, for samples whose prediction distribution is relatively flat, entropy minimization may sharpen the predicted distribution without making it accurate, causing the model to learn more erroneous information; the weighted self-training method, in turn, introduces erroneous noise during self-training because the sample weights themselves are noisy. We address these issues and propose a self-training contrastive partial domain adaptation method (STCPDA). STCPDA mines domain information with two modules. First, we design a self-training module based on simple target-domain samples to address overfitting to the source domain: target samples are divided into simple samples with high reliability and difficult samples with low reliability, and the pseudo-labels of the simple samples are selected for self-training. Second, we construct a contrastive learning module for the source and target domains, embedding contrastive learning into the feature space of the two domains; this module fully exploits the hidden information in all domain samples and makes the class boundaries more salient. Extensive experiments on five datasets demonstrate the effectiveness and excellent classification performance of our method.
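The simple/difficult split described in the abstract can be illustrated with a generic confidence-thresholding scheme, a common way to select reliable pseudo-labels for self-training. This is a minimal sketch of the general technique, not the authors' actual STCPDA implementation; the threshold `tau` and the max-probability reliability criterion are assumptions for illustration.

```python
import numpy as np

def split_by_confidence(probs, tau=0.9):
    """Split target samples into 'simple' (high-reliability) and
    'difficult' (low-reliability) sets by prediction confidence.

    probs : (N, C) array of softmax predictions for N target samples.
    tau   : confidence threshold (illustrative choice, not from the paper).

    Returns the indices of simple samples, their argmax pseudo-labels
    (used for self-training), and the indices of difficult samples.
    """
    conf = probs.max(axis=1)                  # reliability score per sample
    simple = np.where(conf >= tau)[0]         # confident -> keep pseudo-label
    difficult = np.where(conf < tau)[0]       # uncertain -> exclude from self-training
    pseudo_labels = probs[simple].argmax(axis=1)
    return simple, pseudo_labels, difficult

# Example: three target samples, three classes.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25],   # flat distribution -> difficult
                  [0.10, 0.85, 0.05]])
simple, pseudo, difficult = split_by_confidence(probs, tau=0.8)
```

Only the confident samples (here, the first and third) contribute pseudo-labels to the self-training loss; the flat-distribution sample is held out, which is exactly the failure mode the abstract attributes to plain entropy minimization.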