Keywords
Computer science, Scalability, Personalization, Computation, Convergence (economics), Distributed computing, Selection (genetic algorithm), Federated learning, Artificial intelligence, Machine learning, Database, World Wide Web, Algorithm, Economic growth, Economics
Authors
Allan M. de Souza,Filipe Maciel,Joahannes B. D. da Costa,Luiz F. Bittencourt,Eduardo Cerqueira,Antônio A. F. Loureiro,Leandro A. Villas
Source
Journal: Ad Hoc Networks [Elsevier]
Date: 2024-02-29
Volume: 157, Article 103462
Citations: 7
Identifier
DOI:10.1016/j.adhoc.2024.103462
Abstract
Federated Learning (FL) is a distributed approach to collaboratively training machine learning models. FL requires a high level of communication between the devices and a central server, which imposes several challenges, including communication bottlenecks and limited network scalability. This article introduces ACSP-FL, a solution that reduces the overall communication and computation costs of training a model in FL environments. ACSP-FL employs a client selection strategy that dynamically adapts both the number of devices training the model and the number of rounds required to achieve convergence. Moreover, ACSP-FL enables model personalization to improve clients' performance. A use case based on human activity recognition datasets shows the impact and benefits of ACSP-FL compared to state-of-the-art approaches. Experimental evaluations show that ACSP-FL minimizes the overall communication and computation overhead of training a model and drives the system to convergence efficiently. In particular, ACSP-FL reduces communication by up to 95% compared to approaches from the literature, while still converging well even in scenarios where data is distributed in a non-independent and identically distributed (non-IID) manner across client devices.
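The abstract describes two levers: shrinking the set of clients that participate in each round and cutting the number of rounds once the model has converged. The sketch below illustrates that general idea only; it is not the ACSP-FL algorithm from the paper, and all names and parameters (`adaptive_client_selection`, `min_fraction`, `decay`, `tol`) are illustrative assumptions. The participation fraction decays geometrically with the round number, floored at a minimum fraction, and a convergence flag is raised when the relative loss improvement between consecutive rounds stalls.

```python
import math
import random

def adaptive_client_selection(client_ids, round_num, recent_losses,
                              min_fraction=0.1, decay=0.3, tol=1e-3,
                              seed=None):
    """Illustrative sketch of dynamic client selection (not the paper's method).

    Returns (selected_clients, converged):
      - selected_clients: a random subset whose size shrinks as
        exp(-decay * round_num), never below min_fraction of all clients;
      - converged: True when the relative loss improvement over the last
        two rounds drops below tol, signaling that rounds can stop early.
    """
    # Fraction of clients to sample this round, geometric decay with a floor.
    fraction = max(min_fraction, math.exp(-decay * round_num))
    k = max(1, round(fraction * len(client_ids)))
    selected = random.Random(seed).sample(client_ids, k)

    # Early-stopping signal: relative improvement between the last two rounds.
    converged = (
        len(recent_losses) >= 2
        and abs(recent_losses[-2] - recent_losses[-1])
            <= tol * max(abs(recent_losses[-2]), 1e-12)
    )
    return selected, converged
```

For example, with 100 clients and the defaults above, round 0 samples all 100, while round 10 is already at the 10% floor; a loss history of `[0.30, 0.2999]` would flag convergence and let the server end training early, which is the kind of behavior that reduces both communication and computation cost.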