Computer science
Inference
Correctness
Machine learning
Artificial intelligence
Model attack
Metric
Process (computing)
Data mining
Computer security
Algorithm
Engineering
Operations management
Operating system
Authors
Xiaodong Wang, Naiyu Wang, Longfei Wu, Zhitao Guan, Xiaojiang Du, Mohsen Guizani
Identifier
DOI:10.1109/icc45041.2023.10279702
Abstract
Membership inference attack (MIA) has been proven to pose a serious threat to federated learning (FL). However, most existing membership inference attacks against FL rely on specific attack models built from the target model's behaviors, which makes the attacks costly and complicated. In addition, directly adopting inference attacks originally designed for centralized machine learning models in federated scenarios can lead to poor performance. We propose GBMIA, an attack-model-free membership inference method based on gradients. We take full advantage of the federated learning process by observing the target model's behavior after gradient ascent tuning, and we combine prediction correctness with a gradient-norm-based metric for membership inference. The proposed GBMIA can be conducted by both global and local attackers. Experimental evaluations on three real-world datasets demonstrate that GBMIA achieves high attack accuracy. We further apply an arbitration mechanism to increase the effectiveness of GBMIA, which leads to an attack accuracy close to 1 on all three datasets. We also conduct experiments to substantiate that clients going offline and the overlap of clients' training sets have a great effect on membership leakage in FL.
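The abstract's core signal, combining prediction correctness with a gradient-norm-based metric, can be illustrated with a minimal sketch. This is a hypothetical toy on a logistic model, not the paper's actual algorithm: the function names (`gradient_norm`, `gbmia_score`), the threshold `tau`, and the specific decision rule (member if the prediction is correct *and* the per-sample loss gradient is small) are all illustrative assumptions. The intuition it demonstrates is that training members are fit well by the model, so their loss gradients tend to be small.

```python
import numpy as np

def gradient_norm(w, x, y):
    """L2 norm of the per-sample cross-entropy gradient w.r.t. weights w
    for a logistic model (toy stand-in for the target FL model)."""
    p = 1.0 / (1.0 + np.exp(-x @ w))  # predicted probability of class 1
    grad = (p - y) * x                # dL/dw for binary cross-entropy
    return float(np.linalg.norm(grad))

def gbmia_score(w, x, y, tau):
    """Hypothetical membership decision: the sample is inferred to be a
    member if the model classifies it correctly AND its gradient norm
    falls below threshold tau (members are fit well -> small gradients)."""
    correct = (1.0 / (1.0 + np.exp(-x @ w)) > 0.5) == bool(y)
    return bool(correct) and gradient_norm(w, x, y) < tau
```

For example, a well-fit point (`x` aligned with `w`, correct label) yields a tiny gradient norm and is flagged as a member, while a misclassified point fails the correctness check outright.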