Keywords
Inference
Computer science
Property (philosophy)
Discriminative model
Adversary
Computer security
Machine learning
Artificial intelligence
Philosophy
Epistemology
Authors
Zhibo Wang, Yuting Huang, Mengkai Song, Libing Wu, Xue Feng, Kui Ren
Source
Journal: IEEE Transactions on Dependable and Secure Computing
[Institute of Electrical and Electronics Engineers]
Date: 2023-07-01
Volume/Issue: 20 (4): 3328-3340
Citations: 20
Identifier
DOI: 10.1109/tdsc.2022.3196646
Abstract
Federated learning (FL) has emerged as an ideal privacy-preserving learning technique that trains a global model collaboratively while keeping private data local. However, recent advances have demonstrated that FL remains vulnerable to inference attacks, such as reconstruction attacks and membership inference. Among these, the property inference attack, which aims to infer properties of the training data that are irrelevant to the learning objective, has received little attention despite causing severe privacy leakage. Existing property inference approaches achieve unsatisfactory performance either once the global model has converged or under dynamic FL, where participants can drop in and out freely. In this paper, we propose a novel poisoning-assisted property inference attack (PAPI-attack) against FL. The key insight is that the periodic model updates carry latent discriminative information that reflects changes in the data distribution, in particular the occurrence of the sensitive property. A malicious participant can therefore construct a binary attack model to infer this unintended information. More importantly, we present a property-specific poisoning mechanism that modifies the labels of the adversary's training data to distort the decision boundary of the shared (global) model in FL. Consequently, benign participants are induced to disclose more information about the sensitive property. Extensive experiments on real-world datasets demonstrate that PAPI-attack outperforms state-of-the-art property inference attacks against FL.
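The abstract describes two mechanisms: extracting attack features from the periodic model updates, and property-specific label poisoning by the adversary. The sketch below illustrates both in minimal form; it is not the authors' implementation, and all names (`poison_labels`, `update_features`), the flip rate, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def poison_labels(labels, has_property, flip_to, rate=0.5, seed=0):
    """Hypothetical property-specific poisoning: flip the labels of a
    fraction of the adversary's samples that carry the sensitive
    property, so the shared model's decision boundary shifts and benign
    participants' updates reveal more about that property."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = np.flatnonzero(has_property)          # samples with the property
    chosen = rng.choice(idx, size=int(rate * len(idx)), replace=False)
    poisoned[chosen] = flip_to                  # mislabel the chosen samples
    return poisoned

def update_features(prev_weights, new_weights):
    """Flatten one periodic model update (weight delta) into a feature
    vector; a binary attack classifier would be trained on such vectors
    labeled by whether the sensitive property was present."""
    return (new_weights - prev_weights).ravel()
```

For example, with a 50% flip rate exactly half of the property-carrying samples are relabeled, while samples without the property are untouched; the resulting update deltas across rounds form the training set for the binary attack model.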