Keywords
Robustness (evolution)
Computer science
Noise (video)
Deep neural network
Benchmark (surveying)
Artificial intelligence
Convergence (economics)
Machine learning
Path (computing)
Artificial neural network
Deep learning
Function (biology)
Algorithm
Gene
Image (mathematics)
Biology
Evolutionary biology
Economics
Chemistry
Biochemistry
Programming language
Geography
Economic growth
Geodesy
Authors
Defu Liu,Ivor W. Tsang,Guowu Yang
Identifier
DOI:10.1109/tnnls.2022.3202752
Abstract
In many real-world machine learning classification applications, the performance of models based on deep neural networks (DNNs) often suffers from label noise. Various methods have been proposed in the literature to address this issue, primarily by designing noise-tolerant loss functions, cleaning label noise, or correcting the objective loss. However, noise-tolerant loss functions face challenges as the noise level increases. This article aims to reveal the convergence path of a trained model in the presence of label noise, where the convergence path depicts the evolution of the trained model over epochs. We first propose a theorem demonstrating that any surrogate loss function can be used to learn DNNs from noisy labels. Next, theories on the general convergence path of deep models under label noise are presented and verified through a series of experiments. In addition, we design an algorithm based on the proposed theorems that makes efficient corrections to the noisy labels and achieves strong robustness in DNN models. We designed several experiments using benchmark datasets to assess noise tolerance and verify the theorems presented in this article. The comprehensive experimental results firmly confirm our theoretical results and clearly validate the effectiveness of our method under various levels of label noise.
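The label-correction idea mentioned in the abstract can be illustrated with a simple confidence-threshold rule: when the trained model predicts a class with high confidence that disagrees with the observed label, replace the label with the prediction. This is a minimal hypothetical sketch, not the authors' actual algorithm; the `correct_labels` helper and the `threshold` parameter are assumptions for illustration only.

```python
import numpy as np

def correct_labels(probs, noisy_labels, threshold=0.95):
    """Replace a noisy label with the model's prediction when the
    predicted class probability exceeds `threshold`.

    probs        -- (n_samples, n_classes) softmax outputs of the model
    noisy_labels -- (n_samples,) observed, possibly corrupted labels
    Returns the corrected labels and a mask of which samples were eligible.
    """
    preds = probs.argmax(axis=1)                  # model's most likely class
    confident = probs.max(axis=1) >= threshold    # samples the model is sure about
    corrected = np.where(confident, preds, noisy_labels)
    return corrected, confident

# Toy example: 3 samples, 3 classes. The second observed label disagrees
# with a highly confident prediction, so it gets corrected; the third
# prediction is too uncertain, so its label is left unchanged.
probs = np.array([[0.98, 0.01, 0.01],
                  [0.02, 0.97, 0.01],
                  [0.40, 0.35, 0.25]])
noisy = np.array([0, 2, 1])
fixed, mask = correct_labels(probs, noisy)
print(fixed)  # → [0 1 1]
```

In practice such a rule would be applied periodically during training, with the threshold tuned so that corrections only occur after the model has begun to converge on the clean-label structure.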