Authors
Lianting Hu,Huiying Liang,Long Lu
Identifier
DOI:10.1016/j.ins.2020.11.028
Abstract
In recent years, most approaches for few-shot learning have shared a default premise: a large homogeneous annotated dataset is used to pre-train the few-shot learning model. However, since few-shot learning approaches are typically applied in domains where annotated samples are rare, it is difficult to collect another large annotated dataset in the same domain. We therefore propose Splicing Learning to complete the few-shot learning task without the help of a large homogeneous annotated dataset. Splicing Learning increases the sample size of the few-shot set by splicing multiple original images into a single spliced image. Unlike data augmentation techniques, a spliced image contains no false information. Through experiments, we find that the configuration "All-splice + WSG" achieves the best test accuracy of 90.81%, 9.19% better than the baseline. The performance improvement is attributable mostly to Splicing Learning and has little to do with the complexity of the CNN framework. Compared with metric-learning, meta-learning, and GAN models, both Splicing Learning and data augmentation achieve more outstanding performance. Moreover, combining Splicing Learning with data augmentation further improves the test accuracy to 96.33%. The full implementation is available at https://github.com/xiangxiangzhuyi/Splicing-learning.
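The abstract's core idea, splicing several original images into one larger spliced image to enlarge a small training set, can be illustrated with a minimal sketch. The paper's exact splicing configurations (e.g. "All-splice" or the WSG component) are not specified in the abstract, so the grid layout, function name, and image sizes below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def splice_images(images, grid=(2, 2)):
    """Splice same-sized images into one row-major grid image.

    Illustrative sketch only: a plain 2x2 grid stands in for the
    paper's (unspecified) splicing scheme. No pixel values are
    altered, so the spliced image introduces no false information.
    """
    rows, cols = grid
    assert len(images) == rows * cols, "need exactly rows*cols images"
    # Concatenate each row horizontally, then stack the rows vertically.
    row_imgs = [np.concatenate(images[r * cols:(r + 1) * cols], axis=1)
                for r in range(rows)]
    return np.concatenate(row_imgs, axis=0)

# Example: four 28x28 grayscale images -> one 56x56 spliced image.
imgs = [np.full((28, 28), i, dtype=np.uint8) for i in range(4)]
spliced = splice_images(imgs)
print(spliced.shape)  # (56, 56)
```

Because the spliced image is a pure rearrangement of real pixels, this differs from augmentation transforms (rotation, noise, etc.) that synthesize content not present in the originals.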