Server
Computer science
Computer security
Differential privacy
Information privacy
Scheme (mathematics)
Federated learning
Privacy protection
Internet privacy
Computer network
Artificial intelligence
Data mining
Mathematical analysis
Mathematics
Authors
Junqing Le, Di Zhang, Xinyu Lei, Long Jiao, Kai Zeng, Xiaofeng Liao
Identifier
DOI: 10.1109/TIFS.2023.3295949
Abstract
Federated learning (FL) enables multiple clients to jointly train a global model while keeping their training data local, thereby protecting the clients' privacy. However, security issues remain in FL: honest-but-curious servers may mine private information from clients' model updates, and malicious clients may launch poisoning attacks to disturb or break global model training. Moreover, most previous work addresses FL security in the presence of only honest-but-curious servers or only malicious clients. In this paper, we consider a stronger and more practical threat model in FL, in which honest-but-curious servers and malicious clients coexist, called the non-fully-trusted model. In non-fully-trusted FL, the privacy-protection schemes deployed against honest-but-curious servers make all model updates indistinguishable, which in turn makes malicious model updates difficult to detect. To this end, we present an Adaptive Privacy-Preserving FL (Ada-PPFL) scheme built on Differential Privacy (DP) that simultaneously protects clients' privacy and eliminates the adverse effects of malicious clients on model training. Specifically, we propose an adaptive DP strategy that achieves strong client-level privacy protection while minimizing the impact on the global model's prediction accuracy. In addition, we introduce DPAD, an algorithm designed to precisely detect malicious model updates even when the updates are protected by DP. Finally, theoretical analysis and experimental results show that the proposed Ada-PPFL provides client-level privacy protection with 35% less DP noise, while maintaining prediction accuracy similar to that of models trained without malicious attacks.
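To make the abstract's two ingredients concrete, the sketch below shows a generic DP-FedAvg-style aggregation round in Python: each client update is clipped to a fixed L2 norm, the average is perturbed with Gaussian noise (client-level DP), and a simple cosine-similarity outlier filter discards suspicious updates beforehand. This is a minimal sketch under stated assumptions; the clipping bound, noise multiplier, and median-based filter are illustrative choices only, not the paper's adaptive DP strategy or its DPAD detection algorithm.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale an update so its L2 norm is at most clip_norm (standard DP clipping)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Average clipped updates and add Gaussian noise calibrated to the
    sensitivity of the mean (clip_norm / number of clients)."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = np.stack([clip_update(u, clip_norm) for u in updates])
    mean = clipped.mean(axis=0)
    sigma = noise_multiplier * clip_norm / len(updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

def filter_outliers(updates, threshold=0.0):
    """Keep updates whose cosine similarity to the coordinate-wise median is
    above threshold -- a generic robust-aggregation heuristic, NOT the
    paper's DPAD algorithm."""
    ref = np.median(np.stack(updates), axis=0)
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return [u for u in updates if cos(u, ref) > threshold]

# Toy round: 8 honest clients, plus 2 poisoned clients pushing the opposite direction.
rng = np.random.default_rng(0)
honest = [rng.normal(0.1, 0.02, size=10) for _ in range(8)]
poisoned = [-rng.normal(0.1, 0.02, size=10) for _ in range(2)]
kept = filter_outliers(honest + poisoned)
global_update = dp_aggregate(kept, rng=rng)
print(f"kept {len(kept)}/10 updates; noisy global update norm = {np.linalg.norm(global_update):.3f}")
```

Note one simplification: the filter above inspects raw updates before any noise is added, whereas in the paper's non-fully-trusted setting the updates are already DP-protected when inspected, which is precisely why detection is hard there and why a dedicated algorithm (DPAD) is needed.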