Boosting (machine learning)
AdaBoost
Computer science
Artificial intelligence
Machine learning
Ensemble learning
Gradient boosting
Algorithm
Classifier
Decision tree
Random forest
k-nearest neighbors algorithm
Training set
Pattern recognition
Authors
Yoav Freund, Robert E. Schapire
Source
Venue: International Conference on Machine Learning
Date: 1996-07-03
Pages: 148-156
Cited by: 7483
Abstract
In an earlier paper, we introduced a new algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a pseudo-loss, which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's bagging method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
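To make the abstract's idea concrete, here is a minimal sketch of the core AdaBoost loop for binary labels, using one-dimensional threshold stumps as the weak learner. The function names, the stump search, and the round count are illustrative choices, not details taken from the paper; the weight-update and classifier-weight formulas follow the standard binary AdaBoost scheme.

```python
import numpy as np

def adaboost(X, y, n_rounds=10):
    """Sketch of binary AdaBoost with threshold stumps.
    X: (n, d) array of features; y: array of labels in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) tuples."""
    n = len(y)
    w = np.full(n, 1.0 / n)           # uniform initial example weights
    ensemble = []
    for _ in range(n_rounds):
        best, best_err = None, float("inf")
        # exhaustively pick the stump with lowest weighted error
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, t, pol)
        eps = max(best_err, 1e-10)    # avoid division by zero
        if eps >= 0.5:                # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1 - eps) / eps)
        j, t, pol = best
        pred = pol * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)  # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((j, t, pol, alpha))
    return ensemble

def predict(ensemble, X):
    """Weighted-majority vote of the stumps in the ensemble."""
    score = np.zeros(len(X))
    for j, t, pol, alpha in ensemble:
        score += alpha * pol * np.where(X[:, j] <= t, 1, -1)
    return np.where(score >= 0, 1, -1)
```

The key mechanism the abstract alludes to is visible in the weight update: examples the current weak classifier gets wrong receive larger weight, so the next round concentrates on the hardest examples. The pseudo-loss variant extends this idea to multi-label problems by weighting label pairs rather than examples, which is not shown in this binary sketch.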