Computer Science
Initialization
Artificial Intelligence
Artificial Neural Network
Raw Data
Deep Learning
Implementation
Machine Learning
Deep Neural Network
Programming Language
Authors
Michael Baucum, Daniel Belotto, Sayre Jeannet, Eric Savage, Prannoy Mupparaju, Carlos Morato
Identifier
DOI: 10.1145/3094243.3094247
Abstract
Our research focuses on a new data-flow architecture for neural network training called Continuous Neural Network Learning (CNNL), whose main objective is to reduce the amount of data required to train a neural network. In real-world applications, much of the raw data used by deep learning algorithms does not come with large labeled datasets readily available for training. CNNL seeks to enable more efficient neural network implementations by significantly reducing the necessary size of the labeled dataset and, secondarily, by decreasing the processing and training time required to achieve reasonable accuracy. A CNNL system is shown not only to achieve impressive results with little tuning on standardized datasets, but to do so with an initialization set as small as 150 images. While this research is only a first step and requires further refinement for real-world application, it demonstrates the potential of a CNNL system.
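The abstract does not spell out the CNNL training procedure, so the following is only a minimal sketch of one way a system with these goals could be organized: bootstrap a classifier from a small labeled seed set (150 samples, matching the initialization figure quoted above), then continue training on an unlabeled stream via confidence-thresholded self-training (pseudo-labeling). The tiny MLP, the synthetic data, and the 0.9 confidence threshold are illustrative assumptions, not details from the paper.

# Hypothetical sketch in the spirit of CNNL, not the authors' method:
# seed training on 150 labeled samples, then pseudo-label a stream.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CLASSES, FEATURES, SEED_SIZE = 10, 64, 150

def make_batch(n):
    """Synthetic stand-in for labeled images: class signal in the first 10 dims."""
    y = torch.randint(0, NUM_CLASSES, (n,))
    x = torch.randn(n, FEATURES)
    x[:, :NUM_CLASSES] += 3.0 * F.one_hot(y, NUM_CLASSES).float()
    return x, y

model = nn.Sequential(nn.Linear(FEATURES, 128), nn.ReLU(), nn.Linear(128, NUM_CLASSES))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Phase 1: initialize from the small labeled seed set (150 examples).
seed_x, seed_y = make_batch(SEED_SIZE)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(model(seed_x), seed_y)
    loss.backward()
    opt.step()

# Phase 2: continuous learning on an unlabeled stream. Only predictions the
# model is already confident about become pseudo-labels for further training;
# the 0.9 threshold is an illustrative choice.
for step in range(50):
    stream_x, _ = make_batch(64)  # true labels discarded: unlabeled stream
    with torch.no_grad():
        probs = F.softmax(model(stream_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        keep = conf > 0.9
    if keep.any():
        opt.zero_grad()
        loss = F.cross_entropy(model(stream_x[keep]), pseudo_y[keep])
        loss.backward()
        opt.step()

# Evaluate on a fresh held-out batch.
with torch.no_grad():
    test_x, test_y = make_batch(1000)
    acc = (model(test_x).argmax(dim=1) == test_y).float().mean()
print(f"held-out accuracy after seed + stream training: {acc:.3f}")

The design point this sketch illustrates is the one the abstract emphasizes: the labeled-data budget is fixed up front at the small seed set, and all later improvement must come from unlabeled data arriving over time.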