Computer science
Relay
Synchronization (alternating current)
Distributed computing
Enhanced Data Rates for GSM Evolution (EDGE)
Transmission (telecommunications)
Convergence (economics)
Data synchronization
Computer network
Artificial intelligence
Wireless sensor network
Telecommunications
Power (physics)
Channel (broadcasting)
Physics
Economics
Quantum mechanics
Economic growth
Authors
Zhihao Qu, Song Guo, Haozhao Wang, Baoliu Ye, Yi Wang, Albert Y. Zomaya, Bin Tang
Identifiers
DOI: 10.1109/TMC.2021.3083154
Abstract
Federated Learning (FL) is a promising machine learning paradigm for cooperatively training a global model on highly distributed data located on mobile devices. To optimize the communication efficiency of gradient aggregation and model synchronization among large-scale devices, we propose a relay-assisted FL framework. By breaking the traditional transmission-order constraint and exploiting the broadcast characteristic of relay nodes, we design a novel synchronization scheme named Partial Synchronization Parallel (PSP), in which models and gradients are transmitted simultaneously and aggregated at relay nodes, reducing traffic. We prove via rigorous analysis that PSP has the same convergence rate as sequential synchronization approaches. To further accelerate training, we integrate PSP with any unbiased, error-bounded compression technique and prove that the convergence properties of the resulting scheme still hold. Extensive experiments conducted in a distributed cluster environment with real-world datasets demonstrate that our proposed approach reduces training time by up to 37 percent compared to state-of-the-art methods.
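The core traffic-saving idea in the abstract, in-network aggregation at relay nodes, can be illustrated with a minimal sketch. This is a hypothetical two-level topology (devices under relays under a server), not the authors' PSP implementation: each relay sums the gradients of its child devices and forwards a single vector upstream, so upstream traffic is one message per relay rather than one per device.

```python
# Hypothetical sketch of relay-side gradient aggregation in a
# two-level topology: server <- relays <- devices. Names and the
# topology are illustrative assumptions, not the paper's code.

def aggregate_at_relay(child_gradients):
    """Element-wise sum of the gradient vectors received from child nodes."""
    agg = [0.0] * len(child_gradients[0])
    for grad in child_gradients:
        for i, v in enumerate(grad):
            agg[i] += v
    return agg

def hierarchical_aggregate(relays):
    """relays: list of relay nodes, each a list of per-device gradients.
    Each relay forwards one aggregated vector; the server sums those."""
    relay_sums = [aggregate_at_relay(devices) for devices in relays]
    return aggregate_at_relay(relay_sums)

# Example: 4 devices under 2 relays, 3-dimensional gradients.
relays = [
    [[1.0, 2.0, 3.0], [1.0, 0.0, 1.0]],  # devices under relay 0
    [[0.0, 1.0, 0.0], [2.0, 1.0, 2.0]],  # devices under relay 1
]
global_grad = hierarchical_aggregate(relays)  # [4.0, 4.0, 6.0]
```

With 4 devices the server receives 2 messages instead of 4; PSP's additional contribution, per the abstract, is overlapping model and gradient transmissions rather than serializing them.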