Extreme learning machine
Computer science
Matrix norm
Generalization
Algorithm
Artificial neural network
Early stopping
Rank (graph theory)
Inverse
Multivariate random variable
Reliability (semiconductor)
Moore-Penrose pseudoinverse
Matrix (chemical analysis)
Upper and lower bounds
Artificial intelligence
Random variable
Mathematics
Eigenvector
Power (physics)
Statistics
Mathematical analysis
Physics
Geometry
Materials science
Quantum mechanics
Combinatorics
Composite material
Identifiers
DOI:10.1016/j.neunet.2023.04.014
Abstract
Ensuring the prediction accuracy of a learning algorithm on a theoretical basis is crucial for establishing the reliability of that algorithm. This paper analyzes the prediction error obtained through least-squares estimation in the generalized extreme learning machine (GELM), which applies the limiting behavior of the Moore-Penrose generalized inverse (M-P GI) to the output matrix of the ELM. The ELM is a random vector functional link (RVFL) network without direct input-to-output links. Specifically, we analyze the tail probabilities associated with upper and lower bounds on the error, expressed in terms of norms. The analysis employs the L2 norm, the Frobenius norm, the stable rank, and the M-P GI, and it extends to the RVFL network. In addition, we provide a criterion for tighter prediction-error bounds, which may yield stochastically better network environments. The analysis is applied to simple examples and to large datasets, both to illustrate the procedure and to verify the analysis and its execution speed on big data. Based on this study, the upper and lower bounds of prediction errors and their associated tail probabilities can be obtained immediately through the matrix calculations appearing in the GELM and RVFL. This analysis provides criteria for assessing the reliability of a network's learning performance in real time and for choosing a network structure that yields better performance reliability. It can be applied in the various areas where the ELM and RVFL are adopted, and the proposed analytical method will guide the theoretical analysis of errors occurring in deep neural networks (DNNs), which employ gradient descent algorithms.
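To make the quantities named in the abstract concrete, the following is a minimal NumPy sketch of an ELM fitted by least squares via the Moore-Penrose generalized inverse, together with the Frobenius norm, spectral (L2) norm, and stable rank of the hidden-layer output matrix that such bounds are built from. The toy data and variable names are assumptions for illustration; this is not the paper's GELM or its bound formulas.

import numpy as np

# Minimal ELM sketch. Toy data and names are assumptions; the paper's GELM
# refines the Moore-Penrose step, approximated here by np.linalg.pinv.
rng = np.random.default_rng(0)
n, d, L = 200, 5, 50                      # samples, input dim, hidden nodes
X = rng.standard_normal((n, d))
y = np.sin(X.sum(axis=1, keepdims=True))  # toy regression target

W = rng.standard_normal((d, L))           # random, fixed hidden weights
b = rng.standard_normal(L)
H = np.tanh(X @ W + b)                    # hidden-layer output matrix

beta = np.linalg.pinv(H) @ y              # least-squares weights via the M-P GI
y_hat = H @ beta

fro = np.linalg.norm(H, 'fro')            # Frobenius norm of H
spec = np.linalg.norm(H, 2)               # spectral (L2) norm of H
stable_rank = fro**2 / spec**2            # stable rank, at most rank(H)
err = np.linalg.norm(y - y_hat)           # L2 norm of the prediction error
print(f"stable rank = {stable_rank:.2f}, training error = {err:.4f}")

Because the hidden weights are drawn randomly and kept fixed, only the output weights are estimated, which is why the prediction error reduces to properties of H and its generalized inverse rather than of an iterative training procedure.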