Authors
Kuaini Wang, Huimin Pei, Jinde Cao, Ping Zhong
Identifier
DOI: 10.1016/j.jfranklin.2020.05.027
Abstract
Extreme learning machine (ELM) is a powerful data-driven modeling method that has been widely applied in practical fields. However, it relies on the assumption that the training samples are completely clean, free of noise and outliers; this is often not the case in real-world applications, and it results in poor robustness. In this paper, we address a key inefficiency of ELM when confronted with outliers. Introducing a non-convex loss function, we propose a robust regularized extreme learning machine for regression, solved via a difference of convex functions (DC) program and denoted RRELM. The proposed non-convex loss function places a constant penalty on any large outlier to suppress its negative effect, and it can be decomposed into the difference of two convex functions, so RRELM can be solved by DC optimization. Numerical experiments were conducted on various datasets to examine the validity of RRELM, with each training set randomly contaminated at outlier levels of 0%, 10%, 20%, 30%, and 40%. We also applied RRELM to financial time series prediction. The experimental results verify that the proposed RRELM yields superior generalization performance and is less affected by increasing proportions of outliers than the competing methods.
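The scheme described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's exact algorithm: it assumes a truncated squared loss min(r², c²) with the DC decomposition r² − max(r² − c², 0), and solves it with a standard DCA/CCCP loop in which each iteration reduces to a ridge-regression subproblem. All function names and hyperparameters (`n_hidden`, `lam`, `c`) are illustrative choices, not taken from the paper.

```python
import numpy as np

def elm_features(X, W, b):
    # Random hidden layer with sigmoid activation (standard ELM feature map).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def rrelm_fit(X, y, n_hidden=40, lam=0.1, c=0.5, n_iter=20, seed=0):
    """Sketch of a robust regularized ELM trained by a DC (CCCP) loop.

    Assumed loss: truncated squared loss min(r^2, c^2), written as the
    DC decomposition r^2 - max(r^2 - c^2, 0).  The paper's actual loss
    and solver details may differ; this only illustrates the idea that
    each DCA step linearizes the concave part and solves a convex
    (here, closed-form ridge) subproblem.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = elm_features(X, W, b)
    A = lam * np.eye(n_hidden) + H.T @ H
    beta = np.linalg.solve(A, H.T @ y)          # plain ridge ELM as the start
    for _ in range(n_iter):
        r = y - H @ beta
        mask = (np.abs(r) > c).astype(float)    # residuals flagged as outliers
        # DCA subproblem: linearizing -max(r^2 - c^2, 0) at the current
        # iterate cancels the pull of flagged residuals, so the update is
        # a ridge solve against an adjusted target y - mask * r.
        beta = np.linalg.solve(A, H.T @ (y - mask * r))
    return W, b, beta

def rrelm_predict(X, W, b, beta):
    return elm_features(X, W, b) @ beta
```

The constant penalty on large outliers shows up through `mask`: once a residual exceeds the threshold `c`, further growth of that residual no longer influences the subproblem, which is what suppresses the effect of gross outliers in the training set.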