Computer science
Sample (material)
Selection bias
Artificial intelligence
Selection (genetic algorithm)
Machine learning
Sampling bias
Pattern recognition (psychology)
Data mining
Sample size determination
Statistics
Mathematics
Chemistry
Chromatography
Authors
Huafeng Liu, Mengmeng Sheng, Zeren Sun, Yazhou Yao, Xian-Sheng Hua, Heng Tao Shen
Identifier
DOI: 10.1109/tmm.2024.3368910
Abstract
Learning with noisy labels has gained increasing attention because the inevitable imperfect labels in real-world scenarios can substantially hurt deep model performance. Recent studies tend to regard low-loss samples as clean ones and discard high-loss ones to alleviate the negative impact of noisy labels. However, real-world datasets contain not only noisy labels but also class imbalance. The imbalance issue is prone to causing failure in loss-based sample selection, since under-learning of tail classes also tends to produce high losses. To this end, we propose a simple yet effective method to address noisy labels in imbalanced datasets. Specifically, we propose Class-Balance-based sample Selection (CBS) to prevent tail-class samples from being neglected during training. We further propose Confidence-based Sample Augmentation (CSA) for the selected clean samples to enhance their reliability in the training process. To exploit the selected noisy samples, we resort to prediction history to rectify their labels. Moreover, we introduce the Average Confidence Margin (ACM) metric to measure the quality of corrected labels by leveraging the model's evolving training dynamics, thereby ensuring that low-quality corrected noisy samples are appropriately masked out. Lastly, consistency regularization is imposed on the filtered label-corrected noisy samples to boost model performance. Comprehensive experimental results on synthetic and real-world datasets demonstrate the effectiveness and superiority of our proposed method, especially in imbalanced scenarios. The source code has been made available at https://github.com/NUST-Machine-Intelligence-Laboratory/CBS.
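The abstract names two selection mechanisms, CBS (per-class low-loss selection) and ACM (a margin score for filtering corrected labels), without spelling out their computation; the details live in the linked repository. The following minimal Python/NumPy sketch is only a plausible reading of those two ideas: the function names, the keep_ratio parameter, and the zero margin threshold are illustrative assumptions, not the authors' implementation.

import numpy as np

def class_balanced_selection(losses, labels, keep_ratio=0.5):
    # Sketch of the CBS idea: keep the lowest-loss samples *within each
    # class*, so tail classes are not discarded wholesale by a global
    # low-loss cutoff (per-class keep_ratio is an assumed parameter).
    selected = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        k = max(1, int(len(idx) * keep_ratio))  # always keep at least one sample
        selected.extend(idx[np.argsort(losses[idx])[:k]])
    return np.array(sorted(selected))

def average_confidence_margin(prob_history, corrected_labels):
    # Sketch of the ACM idea: average, over training epochs, the gap between
    # the probability of the corrected label and the strongest competing
    # class (the exact definition here is an assumption).
    # prob_history: (epochs, n_samples, n_classes) softmax outputs.
    n = prob_history.shape[1]
    p_label = prob_history[:, np.arange(n), corrected_labels]  # (epochs, n)
    rivals = prob_history.copy()
    rivals[:, np.arange(n), corrected_labels] = -np.inf        # exclude own class
    p_rival = rivals.max(axis=2)                               # (epochs, n)
    return (p_label - p_rival).mean(axis=0)                    # (n,)

# Toy usage: the tail class (label 2) has a single sample, which a
# per-class selection still keeps regardless of its loss rank.
rng = np.random.default_rng(0)
losses = rng.random(10)
labels = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 2])
print(class_balanced_selection(losses, labels))

hist = rng.random((5, 10, 3))
hist /= hist.sum(axis=2, keepdims=True)   # normalize to probability vectors
corrected = rng.integers(0, 3, size=10)
acm = average_confidence_margin(hist, corrected)
keep = acm > 0.0                          # assumed masking threshold
print(keep)

A global low-loss cutoff would concentrate the kept set in head classes, since under-trained tail classes produce uniformly higher losses; selecting per class sidesteps that failure mode, which is exactly the imbalance issue the abstract describes.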