Transfer Learning
Keywords
Classification, Computer science, Domain (mathematical analysis), Perspective (graphical), Machine learning, Artificial intelligence, Feature vector, Labeled data, Feature (linguistics), Key (lock), Space (punctuation), Inductive transfer, Mathematics, Operating system, Robot, Mathematical analysis, Philosophy, Linguistics, Robot learning, Computer security, Mobile robot
Authors
Nidhi Agarwal, Akanksha Sondhi, Khyati Chopra, Ghanapriya Singh
Source
Journal: Advances in Intelligent Systems and Computing
Date: 2020-08-01
Pages: 145-155
Cited by: 72
Identifier
DOI:10.1007/978-981-15-5345-5_13
Abstract
A key assumption in many data mining and machine learning (ML) algorithms is that the training data and the testing data lie in the same feature space and follow the same probability distribution function (PDF). However, in many real-life applications this assumption does not hold. There are problems where training data is costly or difficult to collect, so there is a need to build high-performance classifiers trained on more readily available data from different domains. This methodology is known as transfer learning (TL). TL is typically beneficial when sufficient data is not available in the target domain but a large dataset is available in the source domain. This survey paper explains transfer learning along with its categorization, and provides examples and perspectives related to transfer learning. Negative transfer is also discussed in detail, along with its effects on learning performance in the target domain.
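The scenario the abstract describes, pretraining on abundant source-domain data and then adapting to a data-scarce target domain, can be sketched as weight warm-starting. The following is a minimal illustrative sketch, not code from the paper: the data, the logistic-regression model, and all names (`train_logreg`, `w_source`, `w_transfer`) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Gradient-descent logistic regression.

    Passing `w` warm-starts training from existing weights,
    which is how this sketch models parameter transfer.
    """
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

# Source domain: abundant labeled data; the label depends on feature 0.
Xs = rng.normal(size=(500, 5))
ys = (Xs[:, 0] > 0).astype(float)

# Target domain: the same underlying concept, but only a few
# labeled samples and a slight shift in the input distribution.
Xt = rng.normal(loc=0.3, size=(20, 5))
yt = (Xt[:, 0] > 0).astype(float)

w_source = train_logreg(Xs, ys)                  # pretrain on source domain
w_transfer = train_logreg(Xt, yt, w=w_source.copy(), epochs=50)  # fine-tune on target
w_scratch = train_logreg(Xt, yt, epochs=50)      # target-only baseline
```

Here the transferred model starts from weights that already encode the shared concept, so the few target samples are spent adjusting rather than learning from scratch; the abstract's notion of negative transfer corresponds to the opposite case, where `w_source` encodes a mismatched concept and the warm start hurts target performance.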