Computer science
Software deployment
Context (archaeology)
Overhead (engineering)
Raw data
Data science
Field (mathematics)
Artificial intelligence
Open research
Machine learning
World Wide Web
Software engineering
Paleontology
Programming language
Pure mathematics
Operating system
Biology
Mathematics
Authors
Sawsan AbdulRahman, Hanine Tout, Hakima Ould-Slimane, Azzam Mourad, Chamseddine Talhi, Mohsen Guizani
Identifier
DOI:10.1109/jiot.2020.3030072
Abstract
Driven by privacy concerns and the promise of deep learning, the last four years have witnessed a paradigm shift in how machine learning (ML) is deployed. An emerging model, called federated learning (FL), is rising above both centralized systems and purely on-site analysis to become a new design for ML implementation. It is a privacy-preserving decentralized approach that keeps raw data on devices and performs local ML training, eliminating raw-data communication overhead. The locally learned and shared models are then federated on a central server, which aggregates them and distributes the built knowledge among participants. This article starts by examining and comparing different ML-based deployment architectures, followed by an in-depth and in-breadth investigation of FL. Compared to the existing reviews in the field, this survey provides a new classification of FL topics and research fields based on a thorough analysis of the main technical challenges and current related work. In this context, we elaborate comprehensive taxonomies covering various challenging aspects, contributions, and trends in the literature, including core system models and designs, application areas, privacy and security, and resource management. Furthermore, we discuss important challenges and open research directions toward more robust FL systems.
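The workflow the abstract describes, clients training locally on data that never leaves the device, with a server aggregating the resulting models, is commonly realized by federated averaging (FedAvg). The following is a minimal sketch of one such round, not the survey's own algorithm: it assumes a simple logistic-regression model trained with local gradient descent, and server-side averaging of client weights proportional to local dataset size. All function names and hyperparameters here are illustrative.

```python
import numpy as np

def local_train(weights, data, labels, lr=0.1, epochs=5):
    # One client's local update: logistic-regression gradient descent
    # on its own data, so raw samples never leave the device.
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))   # sigmoid predictions
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    # Server-side federation: collect each client's locally trained
    # model and average the weights, weighted by local dataset size
    # (the FedAvg rule). Only model parameters are communicated.
    updates, sizes = [], []
    for data, labels in clients:
        updates.append(local_train(global_w, data, labels))
        sizes.append(len(labels))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Toy run: three clients holding disjoint shards of a separable task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = (X @ true_w > 0).astype(float)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):          # 20 communication rounds
    w = federated_round(w, clients)
```

After the rounds, `w` approximates a separating direction for the pooled data even though the server never saw a single raw sample, which is the privacy argument the abstract makes.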