Overfitting
Robustness (evolution)
Computer science
Artificial neural network
Modal
Outlier
Artificial intelligence
Benchmark (surveying)
Mean squared error
Machine learning
Generalization
Pattern recognition (psychology)
Algorithm
Data mining
Mathematics
Statistics
Mathematical analysis
Biochemistry
Chemistry
Geodesy
Polymer chemistry
Gene
Geography
Authors
Liangxuan Zhu, Han Li, Wen Wen, Lingjuan Wu, Hong Chen
Identifier
DOI:10.1109/ijcnn54540.2023.10191062
Abstract
Neural networks have been successfully applied in numerous domains with the help of high-quality training samples. However, datasets containing noise and outliers (i.e., corrupted samples) are ubiquitous in the real world. When trained on such datasets, most neural networks exhibit poor predictive performance. In this paper, motivated by modal regression, we propose a Modal Neural Network that is robust to corrupted samples. Specifically, the modal neural network can reveal the most likely trends in the training samples without overfitting the corrupted ones. On the theoretical side, we establish generalization error bounds for the proposed method via Rademacher complexity. On the experimental side, the numerical results demonstrate that our method yields substantial effectiveness and robustness under different levels of corruption on both synthetic and real-world benchmark datasets. Furthermore, our method, as a plug-and-play algorithm, can be readily applied to most neural network architectures and optimizers.
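To make the idea concrete, the snippet below is a minimal sketch, not the authors' exact formulation, of how a kernel-based modal-regression objective can be swapped in for mean squared error in an ordinary training loop. The Gaussian-kernel loss, the bandwidth value, the network size, and the synthetic corrupted data are all illustrative assumptions rather than the configuration reported in the paper.

```python
# Illustrative sketch (assumed setup, not the paper's exact method): train a small
# regression network with a Gaussian-kernel mode-induced loss, which down-weights
# samples with large residuals (e.g., outliers) instead of squaring them as MSE does.
import torch
import torch.nn as nn

def modal_loss(pred, target, bandwidth=0.5):
    """Kernel-based modal regression objective:
    minimize 1 - mean(K_h(y - f(x))) with a Gaussian kernel K_h."""
    residual = target - pred
    kernel = torch.exp(-residual.pow(2) / (2 * bandwidth ** 2))
    return 1.0 - kernel.mean()

# Plain feed-forward network; the loss is plug-and-play, so any architecture
# and optimizer could be used here instead.
model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic data: y = sin(x) plus small Gaussian noise, with 10% of labels corrupted.
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)
corrupt = torch.rand(x.size(0)) < 0.1
y[corrupt] += 5.0 * torch.randn(int(corrupt.sum()), 1)

for epoch in range(200):
    optimizer.zero_grad()
    loss = modal_loss(model(x), y)   # swap in nn.MSELoss() to compare robustness
    loss.backward()
    optimizer.step()
```

Because the kernel term decays toward zero for large residuals, heavily corrupted samples contribute little gradient, which is the intuition behind fitting the most likely trend rather than the mean.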