Authors
Hidde Lycklama,Lukas Burkhalter,Alexander Viand,Nicolas Küchler,Anwar Hithnawi
Identifier
DOI:10.1109/sp46215.2023.10179400
Abstract
Even though recent years have seen many attacks exposing severe vulnerabilities in Federated Learning (FL), a holistic understanding of what enables these attacks and how they can be mitigated effectively is still lacking. In this work, we demystify the inner workings of existing (targeted) attacks. We provide new insights into why these attacks are possible and why a definitive solution to FL robustness is challenging. We show that the need for ML algorithms to memorize tail data has significant implications for FL integrity. This phenomenon has largely been studied in the context of privacy; our analysis sheds light on its implications for ML integrity. We show that certain classes of severe attacks can be mitigated effectively by enforcing constraints such as norm bounds on clients' updates. We investigate how to efficiently incorporate these constraints into secure FL protocols in the single-server setting. Based on this, we propose RoFL, a new secure FL system that extends secure aggregation with privacy-preserving input validation. Specifically, RoFL can enforce constraints such as L2 and L∞ bounds on high-dimensional encrypted model updates.
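The mitigation the abstract highlights is bounding the norm of each client's model update before aggregation. The following is a minimal plaintext sketch of that idea; RoFL itself enforces these checks on encrypted updates inside a secure-aggregation protocol (via privacy-preserving input validation), which is not reproduced here, and the helper names `check_norm_bounds` and `aggregate` are illustrative, not from the paper.

```python
import numpy as np

def check_norm_bounds(update: np.ndarray, l2_bound: float, linf_bound: float) -> bool:
    """Accept a client update only if it satisfies both norm constraints.

    Plaintext analogue of the L2 / L-infinity bounds the paper discusses;
    RoFL proves these properties over *encrypted* updates instead.
    """
    return (np.linalg.norm(update, ord=2) <= l2_bound
            and np.max(np.abs(update)) <= linf_bound)

def aggregate(updates: list[np.ndarray], l2_bound: float, linf_bound: float) -> np.ndarray:
    """Average only the updates that pass validation (robust-aggregation sketch)."""
    valid = [u for u in updates if check_norm_bounds(u, l2_bound, linf_bound)]
    if not valid:
        raise ValueError("no client update satisfied the norm bounds")
    return np.mean(valid, axis=0)

# Illustrative usage: ten small honest updates plus one oversized,
# attack-like update that the norm bounds filter out.
rng = np.random.default_rng(0)
updates = [rng.normal(scale=0.01, size=1000) for _ in range(10)]
updates.append(np.full(1000, 5.0))  # far outside both bounds
agg = aggregate(updates, l2_bound=1.0, linf_bound=0.1)
```

The intuition this captures is the one the abstract states: severe targeted attacks rely on submitting updates that are anomalously large in some norm, so enforcing an L2 or L∞ bound on every accepted update limits any single client's influence on the aggregate.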