Mathematics
Gradient boosting
Gradient descent
Boosting (machine learning)
Regression
Shrinkage
Logistic regression
Mathematical optimization
Applied mathematics
Artificial intelligence
Statistics
Artificial neural network
Computer science
Random forest
Source
Title: Greedy Function Approximation: A Gradient Boosting Machine
Author: Jerome H. Friedman
Journal: Annals of Statistics (Institute of Mathematical Statistics)
Date: 2001-10-01
Volume/Issue: 29 (5)
Cited by: 21482
Identifiers
DOI: 10.1214/aos/1013203451
Abstract
Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient descent “boosting” paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are regression trees, and tools for interpreting such “TreeBoost” models are presented. Gradient boosting of regression trees produces competitive, highly robust, interpretable procedures for both regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Schapire and Friedman, Hastie and Tibshirani are discussed.
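To make the least-squares case of the abstract concrete, below is a minimal sketch of gradient boosting as steepest descent in function space. It assumes scikit-learn's DecisionTreeRegressor as the base learner; the function names and the n_estimators, learning_rate, and max_depth values are illustrative choices, not taken from the paper.

```python
# Minimal sketch of least-squares gradient boosting in function space,
# assuming scikit-learn's DecisionTreeRegressor as the base learner.
# Hyperparameter values below are illustrative, not from the paper.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_estimators=100, learning_rate=0.1, max_depth=3):
    """Build a stagewise additive expansion: each tree approximates the
    negative gradient of the squared-error loss (the current residuals)."""
    f0 = float(np.mean(y))                 # initial constant fit minimizing L2 loss
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_estimators):
        residuals = y - pred               # negative gradient of (1/2)(y - f)^2
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)             # steepest-descent step in function space
        pred += learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def predict_gradient_boosting(X, f0, trees, learning_rate=0.1):
    """Evaluate the additive expansion: f0 plus the scaled sum of tree outputs."""
    return f0 + learning_rate * sum(t.predict(X) for t in trees)
```

For squared-error loss the negative gradient coincides with the ordinary residuals, which is why each stage simply fits a tree to what the current model gets wrong; the learning_rate factor implements the shrinkage listed among the topics above.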