Keywords
Computer science; Transfer learning; Domain (mathematical analysis); Annotation; Similarity (geometry); Task (project management); Artificial intelligence; Machine learning; Negative transfer; Knowledge transfer; Lifelong learning; Labeled data; Transfer (computing); Data mining; Image (mathematics); Knowledge management; Engineering; Mathematical analysis; Systems engineering; Parallel computing; First language; Philosophy; Psychology; Linguistics; Mathematics; Pedagogy
Authors
Wen Zhang, Lingfei Deng, Lei Zhang, Dongrui Wu
Source
Journal: IEEE/CAA Journal of Automatica Sinica [Institute of Electrical and Electronics Engineers]
Date: 2022-11-04
Volume/Issue: 10 (2): 305-329
Citations: 118
Identifier
DOI:10.1109/jas.2022.106004
Abstract
Transfer learning (TL) utilizes data or knowledge from one or more source domains to facilitate learning in a target domain. It is particularly useful when the target domain has very few or no labeled data, due to annotation expense, privacy concerns, etc. Unfortunately, the effectiveness of TL is not always guaranteed. Negative transfer (NT), i.e., the phenomenon that leveraging source-domain data/knowledge undesirably reduces learning performance in the target domain, has been a long-standing and challenging problem in TL. Various approaches have been proposed in the literature to handle it. However, there does not exist a systematic survey of the formulation of NT, the factors leading to NT, and the algorithms that mitigate NT. This paper fills this gap by first introducing the definition of NT and its factors, then reviewing about fifty representative approaches for overcoming NT, grouped into four categories: secure transfer, domain similarity estimation, distant transfer, and NT mitigation. NT in related fields, e.g., multi-task learning, lifelong learning, and adversarial attacks, is also discussed.
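To make the abstract's central notion concrete: negative transfer means that using source-domain data leaves the target learner worse off than ignoring it. The minimal sketch below is not from the paper; the helper names, the naive source+target pooling strategy, and the choice of logistic regression are all illustrative assumptions. It operationalizes NT detection as the accuracy gap between a pooled model and a target-only baseline, and adds a linear-kernel MMD as one simple proxy of the kind the survey's "domain similarity estimation" category reviews.

```python
# Illustrative sketch only; not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score


def mmd_linear(X_a: np.ndarray, X_b: np.ndarray) -> float:
    """Squared maximum mean discrepancy with a linear kernel:
    ||mean(X_a) - mean(X_b)||^2. A crude domain-similarity proxy;
    larger values suggest more distant domains, i.e., a higher
    risk of negative transfer."""
    delta = X_a.mean(axis=0) - X_b.mean(axis=0)
    return float(delta @ delta)


def transfer_gap(X_src, y_src, X_tgt_train, y_tgt_train,
                 X_tgt_test, y_tgt_test) -> float:
    """Accuracy of naive source+target pooling minus a target-only
    baseline. A negative value indicates negative transfer under
    this (deliberately simple) transfer strategy."""
    # Target-only baseline: train on the few labeled target samples alone.
    baseline = LogisticRegression(max_iter=1000).fit(X_tgt_train, y_tgt_train)
    acc_base = accuracy_score(y_tgt_test, baseline.predict(X_tgt_test))

    # Naive transfer: pool all labeled source and target data.
    pooled = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_src, X_tgt_train]),
        np.concatenate([y_src, y_tgt_train]),
    )
    acc_tl = accuracy_score(y_tgt_test, pooled.predict(X_tgt_test))

    return acc_tl - acc_base
```

In practice, a gap at or below zero together with a large MMD would be a signal to fall back to the target-only model; guaranteeing performance no worse than the no-transfer baseline is the intuition behind the secure-transfer category the survey describes.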