FedOComp: Two-Timescale Online Gradient Compression for Over-the-Air Federated Learning

Authors
Ye Xue, Liqun Su, Vincent K. N. Lau
Source
Journal: IEEE Internet of Things Journal [Institute of Electrical and Electronics Engineers]
Volume/Issue: 9 (19): 19330-19345 · Cited by: 15
Identifier
DOI: 10.1109/jiot.2022.3165268
Abstract

Federated learning (FL) is a machine learning framework in which multiple distributed edge Internet of Things (IoT) devices collaboratively train a model under the orchestration of a central server while keeping the training data on the devices. FL can mitigate the privacy risks and costs of data collection in traditional centralized machine learning. However, the deployment of standard FL is hindered by the cost of communicating gradients from the devices to the server, and many gradient compression methods have been proposed to reduce this cost. Existing methods, however, ignore the structural correlations of the gradients and therefore incur a large compression loss, which decelerates training convergence. Moreover, many existing compression schemes do not support over-the-air aggregation and hence require substantial communication resources. In this work, we propose a gradient compression scheme, named FedOComp, which leverages the correlations of the stochastic gradients in FL systems to efficiently compress high-dimensional gradients with over-the-air aggregation. The proposed design achieves a smaller deceleration of training convergence than other gradient compression methods, since the compression kernel exploits the structural correlations of the gradients, and it directly enables over-the-air aggregation to save communication resources. The derived convergence analysis and simulation results further show that, under the same power cost, the proposed scheme achieves a much faster convergence rate and higher test accuracy than existing baselines.
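The abstract does not spell out FedOComp's actual algorithm, but the general idea it describes — learning a compression kernel from the correlation structure of past gradients, then summing the compressed messages as over-the-air aggregation would — can be sketched roughly as follows. This is a hedged illustration only, not the paper's method: the PCA-style kernel, all dimensions, and all variable names below are hypothetical choices for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: gradient dimension, compressed dimension, number of devices.
d, k, n_clients = 512, 16, 8

# Simulate correlated stochastic gradients: a shared low-rank structure plus noise.
basis = rng.standard_normal((d, k))
grads = [basis @ rng.standard_normal(k) + 0.01 * rng.standard_normal(d)
         for _ in range(n_clients)]

# Learn a compression kernel from a history of past gradients (here, 64 samples
# drawn from the same correlated model) via truncated SVD / PCA.
history = np.stack(
    [basis @ rng.standard_normal(k) + 0.01 * rng.standard_normal(d)
     for _ in range(64)], axis=1)                  # d x 64 matrix
U, _, _ = np.linalg.svd(history, full_matrices=False)
kernel = U[:, :k]                                  # d x k orthonormal kernel

# Each device transmits only its k-dimensional projection; analog superposition
# on the wireless channel sums the messages "over the air".
compressed = [kernel.T @ g for g in grads]         # k-dim messages
aggregated = np.sum(compressed, axis=0)            # channel output = sum

# The server decompresses the aggregated gradient in one step.
recovered = kernel @ aggregated
exact = np.sum(grads, axis=0)

rel_err = np.linalg.norm(recovered - exact) / np.linalg.norm(exact)
print(f"compression ratio: {d / k:.0f}x, relative aggregation error: {rel_err:.4f}")
```

Because the simulated gradients nearly lie in a low-dimensional subspace, a correlation-aware kernel recovers the aggregate with small error despite a 32x reduction in transmitted dimensions; a correlation-agnostic scheme (e.g., random sparsification) at the same ratio would lose far more signal.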