Computer science
Machine learning
Artificial intelligence
Leverage (statistics)
Multi-task learning
Task (project management)
Generalization
Kernel (algebra)
Gaussian process
Support vector machine
Kernel method
Deep learning
Multiple kernel learning
Gaussian distribution
Mathematics
Mathematical analysis
Physics
Management
Combinatorics
Quantum mechanics
Economics
Authors
Carlos Ruiz Pastor, Carlos M. Alaíz, José R. Dorronsoro
Identifier
DOI: 10.1016/j.neucom.2024.127255
Abstract
Multi-Task Learning (MTL) seeks to leverage the learning processes of several tasks by solving them simultaneously in order to arrive at better models. This advantage is obtained by coupling the tasks together so that paths are created to share information among them. While deep learning models have been successfully applied to MTL in different fields, the performance of deep approaches often depends on large amounts of data to fit complex models with many parameters, which may not always be feasible; deep models may also lack some advantages that other approaches offer. Kernel methods, such as Support Vector Machines or Gaussian Processes, offer characteristics such as better generalization ability or the availability of uncertainty estimates, which may make them more suitable for small- to medium-size datasets. As a consequence, kernel-based MTL methods stand out among these alternatives to deep models, and there is a rich literature on them. In this paper we review these kernel-based multi-task approaches, group them according to a taxonomy we propose, link some of them to foundational work in machine learning, and comment on datasets commonly used in their study and on relevant applications that use them.
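To make concrete the kind of task coupling the abstract describes, the following Python sketch implements multi-task kernel ridge regression with the classic shared-plus-specific multi-task kernel k_MTL((x, s), (x', t)) = (mu + [s = t]) * k(x, x'), in the spirit of foundational regularized-MTL work. This is a minimal illustration, not the paper's method: the RBF base kernel, the coupling parameter mu, the toy data, and all function names are assumptions made here for demonstration.

import numpy as np

def rbf(X1, X2, gamma=1.0):
    # Base kernel on inputs: k(x, x') = exp(-gamma * ||x - x'||^2).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mtl_kernel(X1, t1, X2, t2, mu=0.5, gamma=1.0):
    # Multi-task kernel: (mu + [s == t]) * k(x, x').
    # mu > 0 couples all tasks; the indicator term adds extra
    # within-task similarity. The elementwise product of the PSD
    # coupling matrix and the PSD base Gram stays PSD (Schur product).
    coupling = mu + (t1[:, None] == t2[None, :]).astype(float)
    return coupling * rbf(X1, X2, gamma)

# Toy data: two related 1-d regression tasks (hypothetical example).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
t = np.repeat([0, 1], 30)                      # task index per sample
y = np.sin(X[:, 0]) + 0.3 * t + 0.1 * rng.standard_normal(60)

# Kernel ridge regression on the joint (input, task) representation:
# solve (K + lambda * I) alpha = y over all tasks at once.
lam = 1e-2
K = mtl_kernel(X, t, X, t)
alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)

# Predict for task 1 on a small grid via the cross-kernel.
Xs = np.linspace(-3, 3, 5)[:, None]
ts = np.ones(5, dtype=int)
y_pred = mtl_kernel(Xs, ts, X, t) @ alpha
print(y_pred)

With mu = 0 the tasks decouple into independent single-task regressors, while a large mu approaches a single model pooled over all tasks, which is one simple way kernel-based MTL trades off sharing against task specificity.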