Topics: Robustness (Evolution), Computer Science, Computer Security, Multidisciplinary Approach, Information Privacy, Federated Learning, Privacy by Design, Internet Privacy, Data Science, Artificial Intelligence, Political Science, Biochemistry, Gene, Chemistry, Law
Authors
Lingjuan Lyu,Han Yu,Xingjun Ma,Chen Chen,Lichao Sun,Jun Zhao,Qiang Yang,Philip S. Yu
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-21
Citations: 108
Identifier
DOI: 10.1109/TNNLS.2022.3216981
Abstract
As data are increasingly stored in separate silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this article, we conduct a comprehensive survey on privacy and robustness in FL over the past five years. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) privacy attacks and defenses; and 3) poisoning attacks and defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions toward robust and privacy-preserving FL, and their interplay with the multidisciplinary goals of FL.
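The core FL step the abstract alludes to — a server aggregating locally trained client models into a global model — is commonly instantiated as federated averaging (FedAvg). A minimal sketch of that aggregation step is shown below; the function name, the toy parameter vectors, and the client sample counts are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client parameter vectors into a global model by a
    weighted average, where each client's weight is proportional to
    the number of local training samples it holds (FedAvg-style)."""
    coeffs = np.array(client_sizes, dtype=float) / sum(client_sizes)
    # Stack per-client parameters and contract against the coefficients.
    return np.tensordot(coeffs, np.stack(client_weights), axes=1)

# Toy example: three clients with 2-parameter models.
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n = [10, 10, 20]  # local dataset sizes
global_w = fedavg(w, n)  # → [3.5, 4.5]
```

Because the server sees only parameter updates, not raw data, this scheme is often described as privacy-friendly; the survey's point is that the updates themselves can still leak information (privacy attacks) or be manipulated (poisoning attacks), motivating the defenses it reviews.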