Computer science
Transfer learning
Artificial intelligence
Machine learning
Task (project management)
Multi-task learning
Domain (mathematics)
A priori and a posteriori
Transformation (genetics)
Inductive transfer
Data mining
Mathematics
Robot learning
Philosophy
Biochemistry
Chemistry
Management
Robot
Epistemology
Mobile robot
Pure mathematics
Economics
Gene
Authors
Bo Liu, Liangjiao Li, Yanshan Xiao, Kai Wang, Jian Hu, Junrui Liu, Qihang Chen, Ruiguang Huang
Source
Journal: ACM Transactions on Knowledge Discovery From Data
[Association for Computing Machinery]
Date: 2023-09-06
Volume/Issue: 18 (1): 1-23
Abstract
Transfer learning (TL) is an information-reuse learning tool that can achieve better classification performance than traditional single-task learning, because it shares information across task-specific models. Most TL algorithms focus on data-level improvement, performing data extraction and transformation, but they ignore additional information in the training data that can improve the model's accuracy, such as Universum samples and privileged information. In this article, we focus on exploiting prior data to improve the TL algorithm: additional features, also called privileged information, are incorporated to improve the learning paradigm, and Universum samples, which do not belong to any of the indicated categories, are also brought into the transfer learning paradigm to improve the utilization of prior knowledge. We propose a new TL model (PU-TLSVM), in which each task, together with its corresponding privileged features and Universum data, is considered, so that tasks with prior data can be applied in the training stage. We then use the Lagrange duality theorem to optimize the model and obtain the optimal discriminant for target-task classification. Finally, we conduct extensive predictions and tests to compare the actual effectiveness of the proposed method with previous methods. The experimental results indicate that the proposed method is more effective and robust than the other baselines.
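As a rough illustration of how Universum samples can be folded into an SVM-style objective, the sketch below trains a linear classifier whose hinge loss separates the labeled classes while an epsilon-insensitive penalty keeps Universum points near the decision boundary. This is only a minimal single-task sketch under assumptions: the toy data, the cvxpy solver, and the parameters C, C_u and eps are illustrative choices, and the paper's full PU-TLSVM additionally couples multiple tasks and privileged features, which are omitted here.

import numpy as np
import cvxpy as cp

# Hypothetical toy data: labeled samples for one task plus Universum samples
# that belong to neither class (all values here are assumptions for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)   # labels in {-1, +1}
X_u = rng.normal(scale=0.3, size=(20, 2))               # Universum samples

C, C_u, eps = 1.0, 0.5, 0.1   # assumed trade-off and insensitivity parameters

w = cp.Variable(2)
b = cp.Variable()

# Hinge loss pushes the labeled classes apart; the eps-insensitive term
# penalizes Universum points that drift away from the decision boundary.
hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ w + b)))
universum = cp.sum(cp.pos(cp.abs(X_u @ w + b) - eps))

prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * hinge + C_u * universum))
prob.solve()
print("w =", w.value, "b =", b.value)

In the paper's setting, the analogous objective is optimized via its Lagrange dual, and the prior knowledge enters both through such Universum terms and through privileged features that are available only at training time.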