Computer science
Overfitting
Ensemble learning
Boosting (machine learning)
AdaBoost
Artificial intelligence
Machine learning
Classifier (UML)
Bayesian probability
Artificial neural network
Pattern recognition (psychology)
Identifier
DOI:10.1007/3-540-45014-9_1
Abstract
Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that AdaBoost does not overfit rapidly.
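The combination rule described in the abstract, a (weighted) vote over the predictions of several classifiers, can be sketched in a few lines. This is an illustrative sketch, not code from the paper; the function name `weighted_vote` and its interface are assumptions for the example.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine per-classifier predictions by weighted majority vote.

    predictions: list of class labels, one per classifier (hypothetical input).
    weights: list of non-negative floats, one per classifier; with equal
             weights this reduces to a simple majority vote (as in Bagging),
             while boosting-style methods assign larger weights to more
             accurate classifiers.
    """
    totals = defaultdict(float)
    for label, w in zip(predictions, weights):
        totals[label] += w
    # Return the label with the largest total weight.
    return max(totals, key=totals.get)

# Example: two of three classifiers predict "a"; their combined weight
# (0.5 + 0.6 = 1.1) exceeds the lone vote for "b" (1.0).
print(weighted_vote(["a", "b", "a"], [0.5, 1.0, 0.6]))  # → a
```

Methods such as Bagging and boosting differ mainly in how the member classifiers are trained and how these weights are chosen, not in the voting step itself.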