Computer science
MNIST database
Cloud computing
Artificial intelligence
Artificial neural network
Machine learning
Edge computing
Edge device
Distributed computing
Overhead (engineering)
Deep learning
Enhanced Data Rates for GSM Evolution (EDGE)
Operating system
Authors
Ke Li,Kexun Chen,Shouxi Luo,H. H. Zhang,Pingzhi Fan
Source
Journal: IEEE Transactions on Network Science and Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 1-17
Citations: 5
Identifier
DOI:10.1109/tnse.2023.3260566
Abstract
Deploying distributed machine learning at the edge helps reduce latency and protects the privacy of data that would otherwise be transmitted back to the cloud. Nonetheless, as machine learning models scale, bandwidth resources are limited and the data collected by heterogeneous edge devices varies considerably, resulting in low model accuracy and high communication overhead. In this paper, a novel distributed deep learning framework called ubiquitous neural network (UbiNN) is proposed to improve communication efficiency without degrading the accuracy of either the local neural network model at the edge or the global neural network model in the cloud. To preserve model accuracy, a common dataset containing a small portion of insensitive data is constructed for training, and accuracy is further enhanced by a new algorithm based on knowledge distillation and covariance computation (KDCC). Experimental results on public datasets such as MNIST, CIFAR-10, and REUTERS-21578 demonstrate that the test accuracy of UbiNN is extremely close to that of centralized machine learning and the non-federated learning scheme DDNN, and up to 18.74% better than that of other classic federated learning schemes. Meanwhile, communication overhead is substantially reduced in terms of both data transmission volume and latency.
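The abstract credits part of the accuracy gain to a knowledge-distillation component within KDCC, but does not give the paper's exact formulation. As a minimal sketch, the standard temperature-scaled distillation loss (soft teacher targets matched against student outputs via KL divergence) could look like the following; all function names here are hypothetical illustrations, not the paper's API.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T produces softer
    # probability distributions over classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between teacher and student soft targets,
    # scaled by T^2 so gradients keep a comparable magnitude
    # across temperatures (a common convention in distillation).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl
```

In an edge setting like the one described, the teacher logits would typically come from the cloud/global model evaluated on the shared insensitive dataset, and each edge device would minimize this loss alongside its local task loss.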