Artificial intelligence
Robustness (evolution)
Scalability
Computer science
Pattern recognition (psychology)
False discovery rate
Data mining
Artificial neural network
Machine learning
Chemistry
Biochemistry
Database
Gene
Authors
Yikai Wang, Yanwei Fu, Xinwei Sun
Identifier
DOI:10.1109/tpami.2023.3338268
Abstract
A noisy training set usually degrades the generalization and robustness of neural networks. In this article, we propose a novel, theoretically guaranteed clean-sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method that models the linear relation between network features and one-hot labels. In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover the clean data under certain conditions. In general scenarios, these conditions may no longer be satisfied, and some noisy data are falsely selected as clean. To solve this problem, we propose a data-adaptive method, Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which provably controls the False-Selection-Rate (FSR) among the selected clean data. To improve efficiency, we further present a split algorithm that divides the whole training set into small pieces that can be solved in parallel, making the framework scalable to large datasets. While Knockoffs-SPR can be regarded as a sample-selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR.
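The core SPR idea described in the abstract can be illustrated with a small sketch. The snippet below is a minimal illustration, not the authors' implementation: it fits one-hot labels as a linear function of features plus a per-sample mean-shift term, applies an element-wise l1 (soft-thresholding) penalty to the mean-shift parameters (the paper uses a row-sparse penalty and its own solver), and flags samples whose mean-shift parameters stay at zero as clean. The function name spr_sketch and all hyper-parameter values are illustrative assumptions.

import numpy as np

def spr_sketch(features, onehot_labels, lam=0.5, lr=0.01, n_iters=500):
    # Toy alternating solver for
    #   min_{beta, gamma} 0.5 * ||Y - X beta - gamma||^2 + lam * ||gamma||_1
    # where gamma holds one mean-shift row per training sample.
    # lam, lr, n_iters are illustrative values, not values from the paper.
    n, d = features.shape
    k = onehot_labels.shape[1]
    beta = np.zeros((d, k))
    gamma = np.zeros((n, k))              # per-sample mean-shift parameters
    for _ in range(n_iters):
        # gradient step on beta for the squared loss, with gamma fixed
        resid = onehot_labels - features @ beta - gamma
        beta += lr * features.T @ resid / n
        # exact minimization over gamma with beta fixed: soft-threshold the residual
        resid = onehot_labels - features @ beta
        gamma = np.sign(resid) * np.maximum(np.abs(resid) - lam, 0.0)
    # SPR's selection rule: samples whose mean-shift rows are zero are treated as clean
    clean_mask = np.linalg.norm(gamma, axis=1) < 1e-8
    return clean_mask, beta, gamma

Knockoffs-SPR replaces this fixed thresholding with a data-adaptive, knockoff-based selection rule so that the FSR among the selected clean samples is provably controlled, and the split algorithm applies the procedure to small subsets of the training data in parallel; neither of those steps is shown in the sketch above.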