Computer science
Overfitting
Transfer learning
Artificial intelligence
Machine learning
Generalization
Domain (mathematical analysis)
Labeled data
Training set
Test data
Deep learning
Inductive transfer
Artificial neural network
Robot learning
Robot
Programming language
Mathematical analysis
Mathematics
Mobile robot
Authors
Jianjun Su,Xuejiao Yu,Xiru Wang,Zhijin Wang,Guoqing Chao
Identifier
DOI:10.1016/j.engappai.2023.107602
Abstract
Traditional machine learning methods rest on the assumption that training and test data are drawn from the same distribution, an assumption that is hard to satisfy in real-world applications. Moreover, deep learning models require a substantial amount of labeled data for training in classification tasks, and limited samples may lead to overfitting. In many real-world scenarios, the target domain offers too few labeled samples for learning. Transfer learning offers an effective solution, allowing knowledge from a source domain to be transferred to a target domain. Additionally, data augmentation enhances model generalization by increasing the number of data samples, which is particularly beneficial when target domain data are limited. In this paper, we improve the model's performance on classification tasks by integrating transfer learning techniques with a data augmentation strategy. Through extensive experiments across various datasets, we verify the effectiveness of the proposed approach.
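The abstract's core recipe, warm-starting a target-domain model from source-domain parameters and augmenting the scarce target data before fine-tuning, can be sketched as follows. This is a toy logistic-regression illustration on hypothetical synthetic data, not the paper's actual model or datasets; all function names and parameters here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Gradient-descent logistic regression; passing `w` warm-starts
    training from existing weights (the transfer-learning step)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def augment(X, y, copies=5, noise=0.05):
    """Jitter-based augmentation: replicate each sample with small
    Gaussian perturbations to enlarge a scarce labeled set."""
    Xa = np.concatenate([X + rng.normal(0.0, noise, X.shape) for _ in range(copies)])
    ya = np.tile(y, copies)
    return Xa, ya

# Source domain: ample labeled data drawn from one distribution.
Xs = rng.normal(0.0, 1.0, (500, 2)) + np.array([1.0, 1.0])
ys = (Xs.sum(axis=1) > 2.0).astype(float)
w_src = train_logreg(Xs, ys)

# Target domain: a shifted distribution with only a handful of labels.
Xt = rng.normal(0.0, 1.0, (10, 2)) + np.array([1.5, 0.5])
yt = (Xt.sum(axis=1) > 2.0).astype(float)

# Combine both ideas: augment the small target set, then fine-tune
# starting from the source-domain weights instead of from scratch.
Xa, ya = augment(Xt, yt)
w_tgt = train_logreg(Xa, ya, w=w_src.copy(), epochs=50)
```

The warm start carries source-domain knowledge into the target model, while the augmented copies reduce overfitting to the ten labeled target samples; the paper applies the same two-pronged idea with deep networks.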