Gradient Boosting
Keywords
Computer science
Boosting (machine learning)
Artificial intelligence
Machine learning
Regression
Extension (predicate logic)
Task (project management)
Mathematics
Random forest
Statistics
Management
Economics
Programming language
Authors
Seyedsaman Emami,Carlos Ruiz Pastor,Gonzalo Martínez-Muñoz
Identifiers
DOI:10.1007/978-3-031-40725-3_9
Abstract
Gradient Boosting Machines (GBMs) have demonstrated outstanding performance in various machine learning applications, such as classification and regression. Gradient boosting builds a set of regression models in an iterative process in which, at each iteration, a regressor is trained to reduce a given loss on a given objective. This paper proposes an extension of gradient boosting that can handle multi-task problems, that is, problems in which the tasks share the attribute space but not necessarily the data distribution. The proposed algorithm splits the GB process into two phases: a first one in which the base models learn the multiple interconnected tasks simultaneously, and a second one in which separate models are built to optimize the loss function on each task. The proposed model achieves better overall performance than models that learn the tasks independently or all tasks together, across several multi-task regression and classification problems.
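The two-phase scheme described in the abstract lends itself to a compact sketch. Below is a minimal illustration of the idea for squared-error regression, assuming scikit-learn decision trees as base learners and pooled task data in the common phase; the function name fit_multitask_gb, the round counts n_common/n_specific, and the pooling strategy are illustrative assumptions, not the authors' exact method (see the DOI above for that). A task's final prediction is the shared initial value plus the contributions of the shared trees and of that task's own trees.

```python
# A minimal sketch of two-phase multi-task gradient boosting for
# squared-error regression. The pooling strategy, hyperparameter names,
# and choice of base learner are illustrative assumptions only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_multitask_gb(tasks, n_common=50, n_specific=50, lr=0.1, max_depth=3):
    """tasks: list of (X, y) pairs sharing the same attribute space."""
    X_all = np.vstack([X for X, _ in tasks])
    y_all = np.concatenate([y for _, y in tasks])
    base = y_all.mean()
    # Phase 1: shared base models learn all tasks simultaneously on pooled data.
    pred_all = np.full_like(y_all, base, dtype=float)
    shared = []
    for _ in range(n_common):
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X_all, y_all - pred_all)  # residual = negative gradient of squared loss
        pred_all += lr * tree.predict(X_all)
        shared.append(tree)
    # Phase 2: per-task models keep reducing each task's own loss,
    # starting from the shared ensemble's predictions.
    specific = []
    for X, y in tasks:
        pred = np.full(len(y), base)
        for tree in shared:
            pred += lr * tree.predict(X)
        trees = []
        for _ in range(n_specific):
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X, y - pred)
            pred += lr * tree.predict(X)
            trees.append(tree)
        specific.append(trees)
    return base, shared, specific

# Usage: two synthetic tasks over the same attribute space but with
# different input-output relationships.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(100, 5)), rng.normal(size=(100, 5))
t1 = (X1, X1[:, 0] + rng.normal(scale=0.1, size=100))
t2 = (X2, 2 * X2[:, 0] + rng.normal(scale=0.1, size=100))
base, shared, specific = fit_multitask_gb([t1, t2])
```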