Computer science
Software deployment
Trustworthiness
Robustness (evolution)
Confidentiality
Information privacy
Adversarial system
Computer security
Data science
Privacy by design
Knowledge management
Internet privacy
Artificial intelligence
Software engineering
Biochemistry
Chemistry
Gene
Authors
Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King
Identifier
DOI:10.1145/3543873.3587681
Abstract
Trustworthy artificial intelligence (AI) technology has revolutionized daily life and greatly benefited human society. Among various AI technologies, Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios, ranging from risk evaluation systems in finance to cutting-edge technologies like drug discovery in life sciences. However, challenges around data isolation and privacy threaten the trustworthiness of FL systems. Adversarial attacks against data privacy, learning algorithm stability, and system confidentiality are particularly concerning in the context of distributed training in federated learning. Therefore, it is crucial to develop FL in a trustworthy manner, with a focus on robustness and privacy. In this survey, we propose a comprehensive roadmap for developing trustworthy FL systems and summarize existing efforts from two key aspects: robustness and privacy. We outline the threats that pose vulnerabilities to trustworthy federated learning across different stages of development, including data processing, model training, and deployment. To guide the selection of the most appropriate defense methods, we discuss specific technical solutions for realizing each aspect of Trustworthy FL (TFL). Our approach differs from previous work that primarily discusses TFL from a legal perspective or presents FL from a high-level, non-technical viewpoint.
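For readers unfamiliar with the distributed training setting the abstract refers to, the core federated step is server-side aggregation of client model updates, as in FedAvg. The sketch below is illustrative only, assuming a simple list-of-floats parameter representation; the function name, client data, and sizes are hypothetical and not drawn from the paper:

```python
# Minimal sketch of FedAvg-style aggregation: the server combines client
# model parameters weighted by each client's local dataset size.
# All names and values here are illustrative assumptions, not from the paper.

def fedavg(client_weights, client_sizes):
    """Return the size-weighted average of client parameter vectors."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three hypothetical clients with 2-parameter models and differing data sizes.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
print(fedavg(clients, sizes))  # → [3.5, 4.5]
```

Because the server never sees raw data, only these parameter updates, the attack surfaces the survey catalogs (poisoned updates, gradient-based privacy leakage) all act through this aggregation step.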