Topics
Boosting (machine learning), Computer science, Machine learning, Random forest, Ensemble learning, Artificial intelligence, Workflow, Categorical variable, Software, Decision tree, Gradient boosting, Algorithm, Data mining, Database, Programming language
Authors
Sergio González,Salvador García,Javier Del Ser,Lior Rokach,Francisco Herrera
Identifier
DOI:10.1016/j.inffus.2020.07.007
Abstract
Ensembles, especially ensembles of decision trees, are among the most popular and successful techniques in machine learning. The number of ensemble-based proposals has grown steadily in recent years, so it is necessary to identify which algorithms are appropriate for a given problem. In this paper, we aim to help practitioners choose the best ensemble technique according to their problem characteristics and their workflow. To do so, we review the most renowned bagging and boosting algorithms and their software tools. These ensembles are described in detail, together with their variants and improvements available in the literature. Their openly available software tools are reviewed with regard to the versions and features they implement, and are categorized by supported programming languages and computing paradigms. The performance of 14 different bagging- and boosting-based ensembles, including XGBoost, LightGBM and Random Forest, is empirically analyzed in terms of predictive capability and efficiency. This comparison is carried out under the same software environment on 76 different classification tasks. Predictive capability is evaluated across a wide variety of scenarios, such as standard multi-class problems, problems with categorical features, and large-scale data. The efficiency of these methods is analyzed on considerably large datasets. Several practical perspectives and opportunities for ensemble learning are also discussed.
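The abstract contrasts the two ensemble families the paper surveys: bagging and boosting. As a rough intuition for the difference (not part of the paper's experiments), the following minimal pure-Python sketch trains both kinds of ensembles of decision stumps on a toy separable 1-D dataset: bagging takes a majority vote over stumps fit to bootstrap resamples, while AdaBoost-style boosting reweights misclassified points between rounds and takes a weighted vote. The toy data, function names, and round counts are all illustrative assumptions.

```python
import math
import random

# Toy 1-D dataset: label +1 when x exceeds 0.5, else -1 (linearly separable).
X = [0.05 * i for i in range(20)]
y = [1 if x > 0.5 else -1 for x in X]

def stump_train(X, y, w):
    """Return (weighted_error, threshold, polarity) of the best decision stump."""
    best = None
    for t in set(X):
        for pol in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if (pol if xi > t else -pol) != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def stump_predict(t, pol, x):
    return pol if x > t else -pol

def bagging(X, y, rounds=5, seed=0):
    """Bagging: fit each stump on a bootstrap resample, majority-vote at prediction."""
    rng = random.Random(seed)
    n = len(X)
    stumps = []
    for _ in range(rounds):
        idx = [rng.randrange(n) for _ in range(n)]        # sample with replacement
        Xs, ys = [X[i] for i in idx], [y[i] for i in idx]
        _, t, pol = stump_train(Xs, ys, [1.0 / n] * n)    # uniform weights
        stumps.append((t, pol))
    return lambda x: 1 if sum(stump_predict(t, pol, x) for t, pol in stumps) >= 0 else -1

def adaboost(X, y, rounds=5):
    """AdaBoost: refit on reweighted data, upweighting hard examples each round."""
    n = len(X)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, t, pol = stump_train(X, y, w)
        err = max(err, 1e-10)                             # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)           # this stump's vote weight
        model.append((alpha, t, pol))
        w = [wi * math.exp(-alpha * yi * stump_predict(t, pol, xi))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]                          # renormalize the distribution
    return lambda x: 1 if sum(a * stump_predict(t, pol, x) for a, t, pol in model) >= 0 else -1

bag, boost = bagging(X, y), adaboost(X, y)
acc_bag = sum(bag(x) == yi for x, yi in zip(X, y)) / len(X)
acc_boost = sum(boost(x) == yi for x, yi in zip(X, y)) / len(X)
```

The production-grade counterparts compared in the paper (Random Forest, XGBoost, LightGBM) follow the same two templates but use full trees, gradient-based boosting, and many engineering refinements.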