Backdoor
Computer science
Context (archaeology)
Fields
Computer security
Orchestration
Deep learning
Data science
Artificial intelligence
Internet privacy
Law
Art
Paleontology
Musical theatre
Political science
Visual arts
Biology
Authors
Thuy Dung Nguyen, Tuan Nguyen, Phi Le Nguyen, Hieu H. Pham, Khoa D. Doan, Kok-Seng Wong
Identifier
DOI:10.1016/j.engappai.2023.107166
Abstract
Federated learning (FL) is a machine learning (ML) paradigm that allows the use of distributed data without compromising personal privacy. In FL, the training data of the participants frequently exhibit heterogeneous distribution characteristics. This inherent heterogeneity poses a substantial challenge for the orchestration server as it strives to assess the reliability of each local model update. As a result, FL is susceptible to various risks, among which the backdoor attack stands out as one of the most menacing threats. Backdoor attacks insert malicious functionality into a targeted model through poisoned updates from malicious clients. These attacks can cause the global model to misbehave on specific inputs while appearing normal on all others. Although backdoor attacks have received significant attention for their potential impact on practical deep learning applications, their exploration in the context of FL remains limited. This survey seeks to address this gap by offering a comprehensive examination of prevailing backdoor attack tactics and defenses in the context of FL. We include an exhaustive analysis of diverse approaches to provide a thorough understanding of this intricate landscape. Furthermore, we discuss the challenges and potential future directions for attacks and defenses in FL.
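As the abstract describes, a malicious FL client mounts a backdoor attack by training its local update on inputs stamped with a trigger pattern and relabeled to an attacker-chosen class, so the global model behaves normally on clean inputs but misclassifies triggered ones. A minimal sketch of such data poisoning, assuming a generic pixel-patch trigger (the function name `poison_example` and all parameters are hypothetical illustrations, not the specific attacks surveyed in the paper):

```python
def poison_example(image, label, target_label, trigger_value=1.0, patch=3):
    """Stamp a small trigger patch onto one training example and relabel it.

    image: H x W grid of pixel intensities (list of lists of floats).
    Returns a poisoned copy; a malicious client would mix such examples
    into its local shard before computing the model update it sends to
    the orchestration server.
    """
    poisoned = [row[:] for row in image]  # copy so the clean data survives
    for r in range(-patch, 0):            # bottom-right patch x patch corner
        for c in range(-patch, 0):
            poisoned[r][c] = trigger_value
    return poisoned, target_label

# Example: poison a blank 6x6 "image" originally labeled class 2.
clean = [[0.0] * 6 for _ in range(6)]
bad_img, bad_lbl = poison_example(clean, label=2, target_label=7)
```

At inference time, any input carrying the same patch activates the hidden behavior, which is why such updates are hard for the server to distinguish from benign heterogeneity.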