Computer science
Convergence (economics)
Orchestration
Data compression
Rate of convergence
Compression (physics)
Distributed computing
Data compression ratio
Software deployment
Real-time computing
Artificial intelligence
Computer network
Image compression
Channel (broadcasting)
Art
Musical theatre
Materials science
Image (mathematics)
Composite material
Image processing
Economics
Visual arts
Economic growth
Operating system
Authors
Ye Xue, Liqun Su, Vincent K. N. Lau
Source
Journal: IEEE Internet of Things Journal
[Institute of Electrical and Electronics Engineers]
Date: 2022-04-06
Volume/Issue: 9 (19): 19330-19345
Citations: 15
Identifier
DOI: 10.1109/jiot.2022.3165268
Abstract
Federated learning (FL) is a machine learning framework in which multiple distributed edge Internet of Things (IoT) devices collaboratively train a model under the orchestration of a central server while keeping the training data on the IoT devices. FL can mitigate the privacy risks and costs of data collection in traditional centralized machine learning. However, the deployment of standard FL is hindered by the cost of communicating the gradients from the devices to the server, and many gradient compression methods have been proposed to reduce this cost. Existing methods ignore the structural correlations of the gradients and therefore incur a large compression loss, which decelerates training convergence. Moreover, many existing compression schemes do not enable over-the-air aggregation and hence require substantial communication resources. In this work, we propose a gradient compression scheme, named FedOComp, which leverages the correlations of the stochastic gradients in FL systems for efficient compression of the high-dimensional gradients with over-the-air aggregation. The proposed design slows training convergence less than other gradient compression methods, since the compression kernel exploits the structural correlations of the gradients, and it directly enables over-the-air aggregation to save communication resources. The derived convergence analysis and simulation results further illustrate that, under the same power cost, the proposed scheme achieves a much faster convergence rate and higher test accuracy than existing baselines.
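To illustrate the general idea described in the abstract, the following is a minimal NumPy sketch of correlation-aware linear gradient compression combined with over-the-air aggregation. It is not the paper's FedOComp algorithm: the compression kernel here is simply fit by PCA on a history of gradients, the channel is modeled as an ideal noisy sum, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's FedOComp): each device projects its gradient
# onto a shared low-dimensional kernel U fitted from past gradient statistics,
# the channel sums the compressed vectors (over-the-air aggregation), and the
# server reconstructs the aggregated gradient from the received sum.

rng = np.random.default_rng(0)
d, k, num_devices = 512, 64, 8          # gradient dim, compressed dim, devices

# Stand-in for "structural correlation": past gradients share a low-rank subspace.
basis = rng.standard_normal((d, k))
past_grads = basis @ rng.standard_normal((k, 200))   # history used to fit kernel

# Fit the compression kernel U (top-k principal directions of the history).
U, _, _ = np.linalg.svd(past_grads, full_matrices=False)
U = U[:, :k]                                          # d x k kernel

# Current round: each device holds a stochastic gradient g_i.
grads = [basis @ rng.standard_normal(k) + 0.01 * rng.standard_normal(d)
         for _ in range(num_devices)]

# Device side: linear compression y_i = U^T g_i (k symbols instead of d).
compressed = [U.T @ g for g in grads]

# Over-the-air aggregation: the wireless channel superimposes the transmitted
# symbols, so the server receives the sum without decoding each device separately.
received = np.sum(compressed, axis=0) + 1e-3 * rng.standard_normal(k)  # + noise

# Server side: reconstruct the aggregated gradient in the original space.
agg_estimate = U @ received
agg_true = np.sum(grads, axis=0)

rel_err = np.linalg.norm(agg_estimate - agg_true) / np.linalg.norm(agg_true)
print(f"relative aggregation error: {rel_err:.3f}")
```

Because the gradients mostly lie in a shared low-rank subspace, the k-dimensional projection preserves the aggregate well; a kernel that ignored this correlation (e.g., naive sparsification per device) would lose more information per transmitted symbol, which is the deceleration effect the abstract refers to.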