Differential privacy
Computer science
Information privacy
Privacy software
Scheme (mathematics)
Order (exchange)
Computer security
Noise (video)
Hierarchy
Private information retrieval
Data mining
Artificial intelligence
Market economy
Mathematics
Image (mathematics)
Mathematical analysis
Economics
Finance
Identifier
DOI:10.1145/3573834.3574544
Abstract
Federated learning is a privacy-preserving machine learning technology: each participant can build a model without disclosing its underlying data, sharing only the model's weight updates and gradient information with the server. However, a substantial body of work shows that attackers can easily recover a client's contributions, and even its private training data, from the publicly shared gradients, so gradient exchange alone is no longer safe. To secure federated learning, differential privacy adds noise to model updates to obscure each client's contribution, thereby resisting membership inference attacks, preventing malicious clients from learning information about other clients, and guaranteeing private outputs. This paper proposes a new differentially private aggregation scheme that adopts a more fine-grained, hierarchical update strategy. For the first time, the f-differential privacy (f-DP) framework is applied to the privacy analysis of federated aggregation, and Gaussian noise is added to perturb model updates in order to provide client-level privacy. We show experimentally that the f-DP analysis improves on previous privacy analyses: it accurately captures the privacy loss at every communication round of federated training and overcomes the tendency of most previous work to guarantee privacy only at the cost of model utility. At the same time, it yields a federated model update scheme with wider applicability and better utility. When enough users participate in federated learning, client-level privacy is guaranteed while model loss is minimized.
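For a concrete picture of the mechanism the abstract describes, below is a minimal sketch of client-level differentially private aggregation in the style of DP-FedAvg, assuming each client sends a flattened update vector. The function name dp_federated_aggregate, its parameters, and the exact noise scaling are illustrative assumptions, not the paper's scheme: each update is clipped to L2 norm C, the clipped updates are averaged, and Gaussian noise with standard deviation sigma * C / n is added to the average.

import numpy as np

def dp_federated_aggregate(client_updates, clip_norm, noise_multiplier, rng=None):
    """Differentially private aggregation of client model updates.

    Clip each client's update to L2 norm `clip_norm`, average the clipped
    updates, and add Gaussian noise calibrated to the per-client
    sensitivity clip_norm / n. (Illustrative sketch under assumed
    parameter names, not the paper's exact scheme.)
    """
    rng = rng if rng is not None else np.random.default_rng()
    n = len(client_updates)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose L2 norm exceeds the clipping bound.
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    mean_update = np.mean(clipped, axis=0)
    # Replacing one client's clipped update moves the mean by at most
    # ~clip_norm / n, so the Gaussian noise std is noise_multiplier * clip_norm / n.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=mean_update.shape)
    return mean_update + noise

# Example: 100 clients, 10-dimensional updates, C = 1.0, sigma = 1.0.
updates = [np.random.randn(10) for _ in range(100)]
noisy_mean = dp_federated_aggregate(updates, clip_norm=1.0, noise_multiplier=1.0)

The round-by-round accounting the abstract credits to f-DP follows standard Gaussian-DP facts (Dong, Roth, and Su): a Gaussian mechanism with noise multiplier sigma on a sensitivity-bounded statistic is mu-GDP with mu = 1/sigma, with trade-off function G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu), and T such rounds compose to sqrt(T) * mu-GDP; client subsampling tightens this further. These are general properties of the framework, not the paper's reported numbers.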