Computer science
Implementation
Popularity
Inference
Variety (cybernetics)
Computer security
Federated learning
Interface (matter)
Compromise
Information privacy
Distributed computing
Artificial intelligence
Software engineering
Psychology
Social psychology
Social science
Bubble
Maximum bubble pressure method
Sociology
Parallel computing
Authors
Till Gehlhar, Felix Marx, Thomas Schneider, Ajith Suresh, Tobias Wehrle, Hossein Yalame
Identifier
DOI:10.1109/spw59333.2023.00012
Abstract
Federated learning (FL) has gained widespread popularity across a variety of industries due to its ability to train models locally on devices while preserving privacy. However, FL systems are susceptible to i) privacy inference attacks and ii) poisoning attacks, through which corrupt actors can compromise the system. Despite a significant amount of work on tackling these attacks individually, their combination has received limited attention in the research community. To address this gap, we introduce SafeFL, a secure multiparty computation (MPC)-based framework designed to assess the efficacy of FL techniques in addressing both privacy inference and poisoning attacks. The heart of the SafeFL framework is a communicator interface that enables PyTorch-based implementations to utilize the well-established MP-SPDZ framework, which implements various MPC protocols. The goal of SafeFL is to facilitate the development of more efficient FL systems that can effectively address privacy inference and poisoning attacks.
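To illustrate the MPC building block the abstract refers to, the following is a minimal sketch of additive secret sharing used for secure aggregation of client updates. This is an illustrative toy, not SafeFL's actual protocol or MP-SPDZ's API: MP-SPDZ implements full (including maliciously secure) MPC protocols, whereas this sketch only shows the semi-honest core idea that no single server ever sees an individual client's update. The function names and the quantized example values are hypothetical.

```python
import secrets

RING = 2**64  # arithmetic ring, as commonly used by MPC protocols

def share(value, n_parties):
    """Split an integer into n additive shares that sum to value mod RING."""
    shares = [secrets.randbelow(RING) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % RING)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod RING."""
    return sum(shares) % RING

def secure_aggregate(client_updates, n_parties=3):
    """Each client secret-shares its (quantized) model update among the
    servers; servers sum share-wise, so only the aggregate is revealed."""
    per_party_sums = [0] * n_parties
    for update in client_updates:
        for i, s in enumerate(share(update, n_parties)):
            per_party_sums[i] = (per_party_sums[i] + s) % RING
    return reconstruct(per_party_sums)

# Hypothetical quantized gradient values from three clients:
print(secure_aggregate([5, 17, 20]))  # -> 42
```

In a real deployment, each coordinate of a model-update vector would be quantized to the ring and shared this way; defenses against poisoning then require evaluating robust aggregation rules inside the MPC, which is exactly the kind of workload SafeFL benchmarks.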