Computer science
Asynchronous communication
Distributed computing
Wireless network
Asynchronous learning
Scheduling (production processes)
Transmission (telecommunications)
Wireless
Computer network
Shared resource
Mathematical optimization
Cooperative learning
Telecommunications
Synchronous learning
Teaching method
Political science
Law
Mathematics
Authors
Haihui Xie, Minghua Xia, Peiran Wu, Shuai Wang, Kaibin Huang
Source
Journal: Cornell University - arXiv
Date: 2024-01-01
Identifier
DOI: 10.48550/arxiv.2401.07122
Abstract
Federated learning (FL) enables wireless terminals to collaboratively learn a shared parameter model while keeping all the training data on the devices themselves. Parameter sharing can be synchronous or asynchronous: the former transmits parameters as blocks or frames and waits until all transmissions finish, whereas the latter provides messages about the status of pending and failed parameter-transmission requests. Whether synchronous or asynchronous parameter sharing is applied, the learning model must adapt to the underlying network architecture, as an ill-suited learning model will degrade learning performance and, worse, cause model divergence under asynchronous transmission in resource-limited, large-scale Internet-of-Things (IoT) networks. This paper proposes a decentralized learning model and develops an asynchronous parameter-sharing algorithm for resource-limited distributed IoT networks. The decentralized learning model approaches a convex function as the number of nodes increases, and its learning process converges to a global stationary point with higher probability than the centralized FL model. Moreover, by jointly accounting for the convergence bound of federated learning and the transmission delay of wireless communications, we develop a node-scheduling and bandwidth-allocation algorithm that minimizes the transmission delay. Extensive simulation results corroborate the effectiveness of the distributed algorithm in terms of fast learning-model convergence and low transmission delay.
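The abstract's contrast between synchronous and asynchronous parameter sharing can be illustrated with a toy simulation. The sketch below is not the paper's algorithm; it is a minimal, hypothetical model of decentralized asynchronous averaging in which each scheduled node mixes only the neighbor parameters that have actually arrived (rather than waiting for all of them) before taking a local gradient step. All function names, the scalar model, the toy quadratic loss, and the 70% delivery probability are illustrative assumptions.

```python
import random

# Hypothetical sketch of decentralized asynchronous parameter sharing.
# Each node holds a local model (a single scalar here, for brevity) and,
# when scheduled, averages with whichever neighbor parameters have
# already been delivered, instead of blocking until every transmission
# finishes (the synchronous alternative described in the abstract).

def local_gradient(w, data):
    # Toy quadratic loss (w - x)^2 per sample; its gradient is 2*(w - x).
    return sum(2.0 * (w - x) for x in data) / len(data)

def async_decentralized_fl(node_data, neighbors, rounds=300, lr=0.05, seed=1):
    rng = random.Random(seed)
    w = [0.0] * len(node_data)            # one scalar model per node
    for _ in range(rounds):
        i = rng.randrange(len(w))         # one node is scheduled per slot
        # Mix with neighbor parameters that happen to be available now;
        # a 70% delivery probability stands in for wireless-link losses.
        avail = [j for j in neighbors[i] if rng.random() < 0.7]
        mix = [w[i]] + [w[j] for j in avail]
        w[i] = sum(mix) / len(mix) - lr * local_gradient(w[i], node_data[i])
    return w

# Usage: four nodes on a ring, each with data centered at a different
# value; asynchronous averaging drives the models toward a consensus
# near the global mean of the data.
models = async_decentralized_fl(
    [[1.0], [2.0], [3.0], [4.0]],
    [[1, 3], [0, 2], [1, 3], [2, 0]],
)
```

The design choice worth noting is that a scheduled node never waits: missing neighbor updates simply shrink the averaging set for that slot, which is what keeps stragglers from stalling the whole network in the asynchronous setting.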