Abstract
How can we select the best-performing data-driven model? How can we rigorously estimate its generalization error? Statistical learning theory (SLT) answers these questions by deriving nonasymptotic bounds on the generalization error of a model or, in other words, by delivering upper bounds on the true error of the learned model based solely on quantities computed from the available data. However, for a long time SLT was regarded only as an abstract theoretical framework, useful for inspiring new learning approaches but of limited applicability to practical problems. The purpose of this review is to give an intelligible overview of the problems of model selection (MS) and error estimation (EE), focusing on the ideas behind the different SLT-based approaches and simplifying most of the technical aspects so as to make them more accessible and usable in practice. We start from the seminal works of the 1980s, proceed to the most recent results, then discuss open problems, and finally outline future directions of this field of research. This article is categorized under: Algorithmic Development > Statistics
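As a concrete illustration (a minimal sketch, not taken from the review itself), the simplest SLT-style bound is the finite-class Hoeffding bound: given k candidate models, each with a bounded-loss empirical error measured on n held-out samples, Hoeffding's inequality combined with a union bound gives an upper bound on the true error that holds simultaneously for all k candidates with probability at least 1 − δ. Model selection then amounts to picking the candidate whose bound, not whose raw empirical error, is smallest. Function and variable names below are illustrative.

```python
import math

def hoeffding_bound(emp_err, n, k, delta=0.05):
    """Upper bound on the true error of one of k candidate models,
    each evaluated on n held-out samples with a [0, 1]-bounded loss.
    By Hoeffding's inequality plus a union bound over the k candidates,
    the bound holds for all k models at once with probability >= 1 - delta.
    """
    return emp_err + math.sqrt((math.log(k) + math.log(1.0 / delta)) / (2.0 * n))

def select_model(emp_errs, n, delta=0.05):
    """SLT-style model selection: pick the candidate minimizing the
    upper bound on its true error, not the raw empirical error."""
    k = len(emp_errs)
    bounds = [hoeffding_bound(e, n, k, delta) for e in emp_errs]
    best = min(range(k), key=bounds.__getitem__)
    return best, bounds[best]

# Three candidate models, empirical errors on n = 1000 held-out samples.
best, bound = select_model([0.12, 0.10, 0.15], n=1000)
```

Note the trade-off the bound makes explicit: the penalty term grows with ln(k) and shrinks with n, so comparing many candidates on little data inflates every certified error, which is exactly the overfitting-in-model-selection effect the review discusses.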