Authors
Degang Wang,Yi Sun,Qi Gao,Fan Yang
Identifier
DOI:10.1109/iip57348.2022.00031
Abstract
Federated learning protects source data by exchanging only model parameters or gradients, yet it still faces the risk of privacy disclosure. For example, a membership inference attack aims to identify whether a target data sample was used to train a machine learning model in federated learning. An active membership inference attack exploits the fact that, in federated learning, an attacker can participate in model training and actively influence model updates to extract more information about the training set, greatly increasing the risk of model privacy disclosure. To address the problem that existing secure aggregation methods for federated learning cannot resist active membership inference attacks, DeMiaAgg, an aggregation method based on cosine-distance filtering, is proposed. Cosine distance is used to quantify the degree of deviation between each client's gradient vector and the global model parameter vector, and malicious gradient vectors are excluded from gradient aggregation to defend against the active membership inference attack. Experiments on the Texas 100 and Location30 datasets show that DeMiaAgg outperforms current state-of-the-art differential privacy and secure aggregation methods and can reduce the accuracy of active membership inference attacks to the level of passive attacks.
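The abstract does not give the full DeMiaAgg algorithm; the following is a minimal sketch of the idea it describes, assuming a simple threshold rule on the cosine distance between each client's flattened gradient vector and the global parameter vector (the function names, the threshold value, and the averaging step are all assumptions, not the paper's actual method):

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity; larger values mean the two vectors
    # point in more different directions.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def filtered_aggregate(client_grads, global_params, threshold=0.5):
    """Hypothetical cosine-distance filtering before aggregation:
    keep only client gradient vectors whose cosine distance to the
    global model parameter vector is below the threshold, then
    average the survivors. Returns a zero update if all are filtered."""
    kept = [g for g in client_grads
            if cosine_distance(g, global_params) < threshold]
    if not kept:
        return np.zeros_like(global_params)
    return np.mean(kept, axis=0)
```

Under this sketch, a malicious client whose crafted gradient deviates strongly in direction from the global parameter vector would be excluded from the round's aggregate, while benign near-aligned gradients are averaged as usual.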