FedMask

Concepts: Computer science, Mobile devices, Deep learning, Artificial intelligence, Bandwidth (computing), Computation, Artificial neural network, Binary number, Deep neural network, Distributed computing, Mobile computing, Data modeling, Machine learning, Computer engineering, Computer network, Algorithm, Arithmetic, Database, Operating system, Mathematics
Authors
Ang Li, Jingwei Sun, Xiao Zeng, Mi Zhang, Hai Li, Yiran Chen
Identifier
DOI: 10.1145/3485730.3485929
Abstract

Recent advancements in deep neural networks (DNNs) have enabled various mobile deep learning applications. However, it is technically challenging to locally train a DNN model due to limited data on devices like mobile phones. Federated learning (FL) is a distributed machine learning paradigm which allows for model training on decentralized data residing on devices without breaching data privacy. Hence, FL becomes a natural choice for deploying on-device deep learning applications. However, the data residing across devices is intrinsically statistically heterogeneous (i.e., non-IID data distribution), and mobile devices usually have limited communication bandwidth to transfer local updates. Such statistical heterogeneity and communication bandwidth limits are two major bottlenecks that hinder applying FL in practice. In addition, considering mobile devices usually have limited computational resources, improving the computation efficiency of training and running DNNs is critical to developing on-device deep learning applications. In this paper, we present FedMask, a communication- and computation-efficient FL framework. By applying FedMask, each device can learn a personalized and structured sparse DNN, which can run efficiently on devices. To achieve this, each device learns a sparse binary mask (i.e., 1 bit per network parameter) while keeping the parameters of each local model unchanged; only these binary masks are communicated between the server and the devices. Instead of learning a shared global model as in classic FL, each device obtains a personalized and structured sparse model composed by applying the learned binary mask to the fixed parameters of the local model. Our experiments show that compared with status quo approaches, FedMask improves the inference accuracy by 28.47% and reduces the communication cost and the computation cost by 34.48X and 2.44X, respectively. FedMask also achieves 1.56X inference speedup and reduces the energy consumption by 1.78X.
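The mask-learning mechanism described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (`binary_mask`, `mask_grad_step`), the squared loss, and the straight-through-style gradient (treating the non-differentiable binarization as identity) are illustrative assumptions. The sketch only shows the core idea: the local weights `W` stay fixed, real-valued scores are trained instead, and thresholding the scores yields the 1-bit-per-parameter binary mask that would be exchanged with the server.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed local model parameters: in FedMask these are never updated
# after initialization; only the mask is learned and communicated.
W = rng.normal(size=(4, 8))

# Real-valued scores; thresholding them yields the binary mask.
scores = rng.normal(size=W.shape)

def binary_mask(s, threshold=0.0):
    # Each entry is 0.0 or 1.0, i.e. 1 bit of communication per weight
    # instead of a 32-bit weight update.
    return (s > threshold).astype(W.dtype)

def forward(x, s):
    # The personalized sparse model: fixed weights gated by the mask.
    return (W * binary_mask(s)) @ x

def mask_grad_step(x, y, s, lr=0.01):
    # One SGD step on the scores for the loss 0.5 * ||pred - y||^2.
    # Binarization is non-differentiable, so a straight-through-style
    # estimator treats d(mask)/d(scores) as identity.
    pred = forward(x, s)
    err = pred - y                  # dL/dpred
    grad_s = np.outer(err, x) * W   # chain rule through W * mask
    return s - lr * grad_s

x = rng.normal(size=8)
y = rng.normal(size=4)
new_scores = mask_grad_step(x, y, scores)
```

In a full FL round, each device would run several such local steps and then upload only `binary_mask(scores)` for server-side aggregation, which is where the communication savings over transmitting full-precision parameter updates come from.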