Computer science
Correctness
Server
Overhead (engineering)
Convergence (economics)
Edge computing
Information privacy
Enhanced Data Rates for GSM Evolution
Process (computing)
Distributed computing
Computer network
Computer security
Artificial intelligence
Algorithm
Operating system
Economics
Economic growth
Authors
Hao Zhou, Geng Yang, Hua Dai, Guoxiu Liu
Identifier
DOI: 10.1109/tifs.2022.3174394
Abstract
Federated learning (FL) protects clients’ privacy from leakage in distributed machine learning, and applying FL to edge computing can protect the privacy of edge clients while benefiting edge computing. Nevertheless, eavesdroppers can still analyze the exchanged parameters to infer clients’ private information and model features, and it is difficult to achieve a high privacy level, convergence, and low communication overhead throughout the entire FL process. In this paper, we propose a novel privacy-preserving federated learning framework for edge computing (PFLF). In PFLF, each client and the application server add noise to the data before sending it. To protect the privacy of clients, we design a flexible arrangement mechanism that determines the optimal number of training rounds for each client. We prove that PFLF guarantees the privacy of clients and servers during the entire training process. We then theoretically establish three main properties of PFLF: 1) for a given privacy level and number of model aggregations, there is an optimal number of participation rounds for clients; 2) convergence has both an upper and a lower bound; 3) PFLF achieves low communication overhead through its flexible participation training mechanism. Simulation experiments confirm the correctness of our theoretical analysis. PFLF therefore provides a framework that balances privacy level and convergence while achieving low communication overhead, even when some clients drop out of training.
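The central mechanism described in the abstract is that both the clients and the application server perturb their outputs with noise before transmission. The sketch below illustrates that general pattern with a Gaussian mechanism; it is a minimal illustration, not the paper’s PFLF algorithm, and the function names, clipping bound, and noise scale are assumptions introduced here for clarity.

```python
import numpy as np

# Illustrative parameters (assumptions, not the values used in the paper)
CLIP_NORM = 1.0   # L2 bound applied to an update before noise is added
NOISE_STD = 0.1   # standard deviation of the Gaussian noise


def perturb(update, clip_norm=CLIP_NORM, noise_std=NOISE_STD, rng=None):
    """Clip an update to a bounded L2 norm and add Gaussian noise.

    This mimics the general idea of adding noise before sending data;
    the paper's exact mechanism and parameters may differ.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)


def client_update(local_grad):
    """Client side: perturb the local update so only noisy data leaves the client."""
    return perturb(local_grad)


def server_aggregate(noisy_updates):
    """Server side: average the received updates and perturb the result
    again before broadcasting the new global model."""
    aggregate = np.mean(noisy_updates, axis=0)
    return perturb(aggregate)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_model = np.zeros(5)
    # Simulate three participating clients with random local gradients
    local_grads = [rng.normal(size=5) for _ in range(3)]
    updates = [client_update(g) for g in local_grads]
    new_model = global_model - 0.1 * server_aggregate(updates)
    print("updated global model:", new_model)
```

In this toy run the server only ever sees clipped, noise-perturbed updates, which is the property the abstract attributes to PFLF; choosing the noise scale to meet a target privacy level and bounding the number of participation rounds is exactly the trade-off the paper analyzes.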