Computer science
Overhead (engineering)
Federated learning
Enhanced Data Rates for GSM Evolution (EDGE)
Convergence (economics)
Edge device
Internet
Artificial intelligence
Machine learning
Distributed computing
Data science
World Wide Web
Economic growth
Cloud computing
Operating system
Economics
Authors
Mingzhe Chen, Nir Shlezinger, H. Vincent Poor, Yonina C. Eldar, Shuguang Cui
Identifier
DOI: 10.1073/pnas.2024789118
Abstract
Significance: Federated learning (FL) is an emerging paradigm that enables multiple devices to collaborate in training machine learning (ML) models without having to share their possibly private data. FL requires a multitude of devices to frequently exchange their learned model updates, introducing significant communication overhead, which poses a major challenge for FL over realistic networks with limited computational and communication resources. In this article, we propose a communication-efficient FL framework that enables edge devices to efficiently train and transmit model parameters, significantly improving FL performance and convergence speed. Our proposed FL framework paves the way to collaborative ML in large-scale networking systems such as Internet of Things networks.
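To make the communication bottleneck described above concrete, here is a minimal Python/NumPy sketch of federated averaging in which each client compresses its model update via top-k sparsification before "transmitting" it. This is a generic illustration of communication-efficient FL, not the specific framework proposed in the paper; the function names (local_step, top_k_sparsify), the toy least-squares task, and all parameter values are assumptions made for the example.

```python
import numpy as np

# Sketch: FedAvg with top-k sparsified client updates (illustrative only,
# not the authors' method). Each client takes one local SGD step on its
# private data, sparsifies the resulting update, and the server averages
# the sparse updates into the global model.

def local_step(weights, data, lr=0.1):
    """One local gradient step on a client's private least-squares data."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of mean squared error
    return weights - lr * grad

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries; zero the rest.
    Sending (index, value) pairs for k << d entries instead of the
    dense d-vector is one common way to cut per-round communication."""
    sparse = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    sparse[idx] = update[idx]
    return sparse

rng = np.random.default_rng(0)
d, n_clients, k = 20, 5, 5          # model size, clients, kept entries
w_true = rng.normal(size=d)
clients = []
for _ in range(n_clients):          # synthetic private dataset per client
    X = rng.normal(size=(50, d))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w_global = np.zeros(d)
for rnd in range(200):              # communication rounds
    updates = []
    for data in clients:
        w_local = local_step(w_global.copy(), data)
        updates.append(top_k_sparsify(w_local - w_global, k))  # compress uplink
    w_global += np.mean(updates, axis=0)  # server aggregates sparse updates

print("final error:", np.linalg.norm(w_global - w_true))
```

In this sketch each client uploads only k of the d model entries per round, shrinking the uplink payload by roughly a factor of d/k at the cost of somewhat slower convergence, which is the trade-off that communication-efficient FL designs aim to improve.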