Computer science
Asynchronous communication
Data aggregator
Federated learning
Edge device
Distributed computing
Symmetric multiprocessing
Domain (mathematical analysis)
Data science
Cloud computing
Computer network
Wireless sensor network
Mathematics
Operating system
Mathematical analysis
Authors
Chao-Nan Xu, Youyang Qu, Yong Xiang, Longxiang Gao
Identifier
DOI:10.1016/j.cosrev.2023.100595
Abstract
Federated learning (FL) is a distributed machine learning framework in which a global model is generated on a centralized aggregation server from the parameters of local models, addressing concerns about privacy leakage caused by the collection of local training data. With the growing computational and communication capacities of edge and IoT devices, applying FL on heterogeneous devices to train machine learning models is becoming a prevailing trend. Nonetheless, the synchronous aggregation strategy in the classic FL paradigm, particularly on heterogeneous devices, encounters limitations in resource utilization due to the need to wait for slow devices before aggregation in each training round. Furthermore, the uneven distribution of data across devices (i.e., data heterogeneity) in real-world scenarios adversely impacts the accuracy of the global model. Consequently, many asynchronous FL (AFL) approaches have been introduced across various application contexts to enhance efficiency, performance, privacy, and security. This survey comprehensively analyzes and summarizes existing AFL variations using a novel classification scheme covering device heterogeneity, data heterogeneity, privacy and security on heterogeneous devices, and applications on heterogeneous devices. Finally, this survey reveals rising challenges and presents potentially promising research directions in this under-investigated domain.
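The contrast the abstract draws between synchronous and asynchronous aggregation can be sketched as follows. This is an illustrative toy, not code from the surveyed paper: `sync_aggregate`, `async_update`, and the mixing rate `alpha` are hypothetical names, and model parameters are simplified to single floats.

```python
def sync_aggregate(local_params, weights):
    """Synchronous FL round: the server waits for ALL devices to report,
    then takes a weighted average of their local parameters (FedAvg-style).
    A single slow device therefore delays the whole round."""
    total = sum(weights)
    return sum(p * w for p, w in zip(local_params, weights)) / total

def async_update(global_param, local_param, alpha=0.5):
    """Asynchronous FL: the server folds in ONE device's update as soon
    as it arrives, mixing it into the global model at rate alpha, so no
    device has to wait for stragglers."""
    return (1 - alpha) * global_param + alpha * local_param

# Synchronous round: three devices report; aggregation blocks until all arrive.
g = sync_aggregate([1.0, 2.0, 3.0], weights=[1, 1, 2])  # -> 2.25
# Asynchronous: a straggler's update arrives later and is mixed in alone.
g = async_update(g, local_param=4.0, alpha=0.5)         # -> 3.125
```

The single `alpha` knob also hints at the accuracy issue the abstract raises: under data heterogeneity, how strongly a late, possibly skewed local update is mixed in directly affects the global model.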