Computer Science
Federated Learning
Computer Security
Theoretical Computer Science
World Wide Web
Artificial Intelligence
Authors
Pengyu Lu, Xianjia Meng, Ximeng Liu
Identifier
DOI:10.1007/978-981-99-9785-5_18
Abstract
Federated learning emerged to address the privacy-leakage problems of traditional centralized machine learning. However, even though federated learning updates the global model by exchanging gradients rather than raw data, an attacker may still infer private information from the model updates through backward inference, which can again lead to privacy leakage. To enhance the security of federated learning, we present a multi-key Cheon-Kim-Kim-Song (CKKS) scheme for privacy protection in federated learning. Our approach enables each participant to train on its local dataset while maintaining both data security and model accuracy, and we further introduce FedCMK, a more efficient and secure federated learning framework. FedCMK uses an improved client selection strategy to speed up training, redesigns the key aggregation process around that strategy, and proposes a scheme, vMK-CKKS, that guarantees the security of the framework within a certain threshold. In particular, the vMK-CKKS scheme adds a secret verification mechanism that prevents participants from mounting malicious attacks with false information. Experiments show that the proposed vMK-CKKS scheme significantly improves security and efficiency over previous encryption schemes. FedCMK reduces training time by 21% on average while preserving model accuracy, and it provides robustness by allowing participants to join or leave during training.
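The abstract's core idea is that the server should learn only the aggregate of the clients' updates, never any individual gradient. The paper achieves this with multi-key CKKS homomorphic encryption; as a minimal stand-in, the toy sketch below illustrates the same aggregation property with pairwise-cancelling additive masks (not the authors' vMK-CKKS, and with no cryptographic hardness; all names and the masking construction are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_zero_sum_masks(n_clients, dim, rng):
    """Random per-client masks constructed so they sum to the zero vector."""
    masks = rng.normal(size=(n_clients - 1, dim))
    last = -masks.sum(axis=0)  # final mask cancels all the others
    return np.vstack([masks, last])

def secure_aggregate(gradients, rng):
    """Server sums masked updates; each individual gradient stays hidden.

    Each row of `gradients` is one client's local update. Clients send only
    gradient + mask; summing the masked updates cancels the masks and
    recovers the true aggregate, which is all the server ever sees.
    """
    n, dim = gradients.shape
    masks = make_zero_sum_masks(n, dim, rng)
    masked = gradients + masks      # what each client would transmit
    return masked.sum(axis=0)       # masks cancel -> true gradient sum

# Three clients, two model parameters each.
grads = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
agg = secure_aggregate(grads, rng)
print(np.allclose(agg, grads.sum(axis=0)))  # True: aggregate is exact
```

In the paper's setting, CKKS ciphertexts play the role of the masks: clients encrypt updates under a jointly derived key, the server adds ciphertexts homomorphically, and decryption of the sum requires a threshold of participants, which is what the vMK-CKKS verification mechanism protects.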