Keywords
Inductive transfer
Transfer of learning
Computer science
Machine learning
Artificial intelligence
Bayesian probability
Parametric statistics
Instance-based learning
Negative transfer
Perspective (graphics)
Algorithm
Task (project management)
Active learning (machine learning)
Mathematics
Robot learning
Management
First language
Robot
Economics
Mobile robot
Statistics
Linguistics
Philosophy
Authors
Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu
Identifier
DOI:10.1016/j.artint.2023.103991
Abstract
Transfer learning is a machine learning paradigm in which knowledge from one problem is used to solve a new but related problem. While it is conceivable that knowledge from one task could help solve a related task, transfer learning algorithms that are not executed properly can impair learning performance instead of improving it, a phenomenon commonly known as negative transfer. In this paper, we use a parametric statistical model to study transfer learning from a Bayesian perspective. Specifically, we study three variants of the transfer learning problem: instantaneous, online, and time-variant transfer learning. We define an appropriate objective function for each problem and provide either exact expressions or upper bounds on the learning performance in terms of information-theoretic quantities, which admit simple and explicit characterizations when the sample size becomes large. Furthermore, examples show that the derived bounds are accurate even for small sample sizes. The obtained bounds give valuable insights into the effect of prior knowledge on transfer learning, at least with respect to our Bayesian formulation of the transfer learning problem. In particular, we formally characterize the conditions under which negative transfer occurs. Lastly, we devise several (online) transfer learning algorithms that are amenable to practical implementation, some of which do not require the parametric assumption. We demonstrate the effectiveness of our algorithms with real data sets, focusing primarily on settings where the source and target data have strong similarities.
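The abstract itself contains no code, but the core idea of Bayesian transfer and of negative transfer can be illustrated with a minimal sketch. Below, source-task knowledge enters as a Gaussian prior on the mean of a Gaussian target model; the function names, parameter values, and conjugate-Gaussian setup are illustrative assumptions, not the authors' actual formulation or bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayes_estimate(target_samples, prior_mean, prior_var, noise_var=1.0):
    """Posterior mean of a Gaussian mean parameter under a Gaussian prior.

    The prior encodes (hypothetical) source-task knowledge; a confident,
    well-placed prior mimics positive transfer, while a confident but
    misplaced prior mimics negative transfer. Illustrative sketch only.
    """
    n = len(target_samples)
    post_precision = 1.0 / prior_var + n / noise_var
    post_mean = (prior_mean / prior_var
                 + np.sum(target_samples) / noise_var) / post_precision
    return post_mean

true_theta = 2.0
target = rng.normal(true_theta, 1.0, size=5)  # few target samples

# Related source task: a confident prior near the truth helps.
est_good = bayes_estimate(target, prior_mean=2.1, prior_var=0.1)
# Unrelated source task: a confident but wrong prior hurts (negative transfer).
est_bad = bayes_estimate(target, prior_mean=-3.0, prior_var=0.1)
# No transfer: a nearly flat prior reduces to the target sample mean.
est_none = bayes_estimate(target, prior_mean=0.0, prior_var=1e6)

print("errors:",
      abs(est_good - true_theta),
      abs(est_bad - true_theta),
      abs(est_none - true_theta))
```

Running the sketch shows the good prior yielding the smallest error and the misplaced confident prior a larger error than using the target data alone, which is the qualitative behaviour the paper characterizes formally.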