Computer science
Upload
Bottleneck
Encryption
Homomorphic encryption
Wireless network
MNIST database
Wireless
Personalization
Data compression
Artificial neural network
Data mining
Computer network
Artificial intelligence
Distributed computing
Machine learning
Embedded system
Telecommunications
World Wide Web
Operating system
Authors
Xi Zhu, Junbo Wang, Wuhui Chen, Kento Sato
Identifier
DOI: 10.1016/j.future.2022.10.026
Abstract
Federated learning (FL) is a collaborative learning paradigm that has attracted extensive attention for its privacy-preserving character: clients collaboratively train a shared neural network model on their local datasets and, throughout training, upload only their model parameters over the wireless network rather than the original data. Because FL significantly reduces transmission, it can further meet the efficiency and security requirements of next-generation wireless systems. Although FL reduces the amount of information that must be transmitted, the model parameter updates still suffer from privacy leakage and communication bottlenecks, especially in wireless networks. To address these privacy and communication problems, this paper proposes a model-compression-based FL framework. First, the designed model compression framework provides effective support for efficient and secure model parameter updating in FL while preserving the personalization of all clients. Second, the proposed perturbed model compression method further reduces the model size and protects model privacy without sacrificing much accuracy. In addition, it allows decryption and decompression to be performed simultaneously by a reconstruction algorithm on the encrypted and compressed model parameters produced by the perturbed model compression method. Finally, the illustrative results demonstrate that the proposed framework significantly reduces the number of model parameters to be uploaded while providing strong privacy preservation. For example, at a compression ratio of 0.0953 (i.e., only 9.53% of the parameters are uploaded), the accuracy on MNIST reaches 97%, compared with 98% without compression.
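The abstract does not specify how the perturbed model compression is implemented, so the sketch below is only one plausible reading of it: top-k magnitude sparsification to reach a target compression ratio, additive Gaussian perturbation of the kept coordinates before upload, and a server-side reconstruction that fills unsent coordinates with zeros. The function names, the noise scale, and the choice of Gaussian noise are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def perturbed_compress(params, ratio=0.0953, noise_std=0.01, rng=None):
    # Hypothetical stand-in for the paper's perturbed model compression:
    # keep only the top-k entries by magnitude (k = ratio * size) and
    # add Gaussian noise to the kept values so the true values are masked.
    rng = rng or np.random.default_rng()
    flat = params.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]        # indices of top-k entries
    values = flat[idx] + rng.normal(0.0, noise_std, k)  # perturb before upload
    return idx, values, params.shape

def reconstruct(idx, values, shape):
    # Server-side reconstruction: scatter the received (perturbed) values
    # into a dense zero tensor; coordinates that were not sent stay zero.
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

# Example: round-trip one client's layer weights through the sketch.
w = np.random.default_rng(0).standard_normal((128, 64))
payload = perturbed_compress(w, ratio=0.0953, noise_std=0.01)
w_hat = reconstruct(*payload)
print(f"uploaded {payload[1].size / w.size:.2%} of the parameters")
```

Under this reading, one upload step transmits roughly 9.53% of the coordinates, matching the compression ratio quoted in the abstract, and the added noise plays the privacy-perturbation role; the actual paper may combine this differently with its encryption scheme.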