Bootstrapping (finance)
Machine learning
Computer science
Selection (genetic algorithm)
Cross-validation
Artificial intelligence
Variance (accounting)
Model selection
Algorithm
Data mining
Mathematics
Econometrics
Accounting
Business
Source
Journal: Cornell University - arXiv
Date: 2018-11-13
Citations: 563
Identifier
DOI: 10.48550/arxiv.1811.12808
Abstract
The correct use of model evaluation, model selection, and algorithm selection techniques is vital in academic machine learning research as well as in many industrial settings. This article reviews different techniques that can be used for each of these three subtasks and discusses the main advantages and disadvantages of each technique with references to theoretical and empirical studies. Further, recommendations are given to encourage best yet feasible practices in research and applications of machine learning. Common methods such as the holdout method for model evaluation and selection are covered, which are not recommended when working with small datasets. Different flavors of the bootstrap technique are introduced for estimating the uncertainty of performance estimates, as an alternative to confidence intervals via normal approximation if bootstrapping is computationally feasible. Common cross-validation techniques such as leave-one-out cross-validation and k-fold cross-validation are reviewed, the bias-variance trade-off for choosing k is discussed, and practical tips for the optimal choice of k are given based on empirical evidence. Different statistical tests for algorithm comparisons are presented, and strategies for dealing with multiple comparisons such as omnibus tests and multiple-comparison corrections are discussed. Finally, alternative methods for algorithm selection, such as the combined F-test 5x2 cross-validation and nested cross-validation, are recommended for comparing machine learning algorithms when datasets are small.
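Two of the techniques the abstract names, the percentile bootstrap for quantifying the uncertainty of a performance estimate and k-fold cross-validation splitting, can be sketched in a few lines of standard-library Python. This is an illustrative sketch, not code from the paper; the function names, the toy 0/1 score data, and the choice of 1000 bootstrap rounds are assumptions made here for demonstration.

```python
import random
from statistics import mean

def bootstrap_ci(scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of per-example scores.

    Resamples the scores with replacement n_boot times and takes the
    alpha/2 and 1-alpha/2 quantiles of the resampled means, instead of
    relying on a normal approximation.
    """
    rng = random.Random(seed)
    boot_means = sorted(
        mean(rng.choices(scores, k=len(scores))) for _ in range(n_boot)
    )
    lo = boot_means[int((alpha / 2) * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def kfold_indices(n, k, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation:
    each example appears in exactly one test fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

# Hypothetical 0/1 correctness scores for 30 predictions (80% accuracy)
scores = [1] * 24 + [0] * 6
lo, hi = bootstrap_ci(scores)
print(f"accuracy = {mean(scores):.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")

splits = list(kfold_indices(30, k=5))
print(len(splits), "folds, test sizes:", [len(te) for _, te in splits])
```

For small datasets, the paper's point is that a single holdout split gives a noisy estimate; the bootstrap interval above makes that noise visible, and the k-fold splitter reuses every example for both training and testing.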