Keywords
computer science, robustness, cluster analysis, federated learning, adversary, distributed computing, computer security, information privacy, benchmark, computer networks, data mining, artificial intelligence
Authors
Xiao Chen, Haining Yu, Xiaohua Jia, Xiangzhan Yu
Identifier
DOI:10.1109/tifs.2023.3315125
Abstract
Federated learning (FL) is an emerging paradigm of privacy-preserving distributed machine learning that effectively deals with the privacy leakage problem by utilizing cryptographic primitives. However, how to prevent poisoning attacks in distributed settings has recently become a major FL concern. Indeed, an adversary can manipulate multiple edge nodes and submit malicious gradients to disrupt the global model's availability. Currently, most existing works assume an Independently and Identically Distributed (IID) setting and identify malicious gradients in plaintext. However, we demonstrate that current works cannot handle the challenges of data heterogeneity scenarios and that publishing unencrypted gradients causes significant privacy leakage. Therefore, we develop APFed, a layered privacy-preserving defense mechanism that significantly mitigates the effects of poisoning attacks in data heterogeneity scenarios. Specifically, we exploit homomorphic encryption (HE) as the underlying technique and employ the coordinate-wise median as the benchmark. Subsequently, we propose a secure cosine similarity scheme to identify poisonous gradients, and we innovatively use clustering as part of the defense mechanism, developing a hierarchical aggregation that enhances our scheme's robustness in IID and non-IID scenarios. Extensive evaluations on two benchmark datasets demonstrate that APFed outperforms existing defense strategies while reducing communication overhead by replacing expensive remote communication with inexpensive intra-cluster communication.
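The core filtering idea described in the abstract (using the coordinate-wise median of client gradients as a benchmark, then rejecting gradients whose cosine similarity to that benchmark is too low) can be sketched in plaintext as below. This is a minimal illustrative sketch, not the paper's actual scheme: APFed performs these comparisons under homomorphic encryption with hierarchical, cluster-based aggregation, and the function names and the `threshold` parameter here are hypothetical choices for exposition.

```python
import numpy as np

def coordinatewise_median(gradients):
    # Benchmark gradient: coordinate-wise median across all client updates.
    return np.median(np.stack(gradients), axis=0)

def cosine_similarity(a, b):
    # Standard cosine similarity; small epsilon guards against zero vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_poisoned(gradients, threshold=0.0):
    # Keep gradients pointing in roughly the same direction as the benchmark;
    # flag the rest as potentially poisonous (hypothetical threshold).
    benchmark = coordinatewise_median(gradients)
    return [g for g in gradients if cosine_similarity(g, benchmark) > threshold]

def aggregate(gradients):
    # Average only the accepted gradients to form the global update.
    accepted = filter_poisoned(gradients)
    return np.mean(np.stack(accepted), axis=0)
```

For example, with two honest updates near `[1, 1]` and one sign-flipped update `[-1, -1]`, the flipped gradient has negative cosine similarity to the median benchmark and is excluded before averaging.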