Quantization (signal processing)
Computer science
Edge device
Communication system
Vector quantization
Overhead (engineering)
Data compression
Computer engineering
Algorithm
Artificial intelligence
Real-time computing
Computer network
Cloud computing
Operating system
Authors
Zirui Lian,Jing Cao,Yanru Zuo,Weihong Liu,Zongwei Zhu
Identifier
DOI:10.1109/iccd53106.2021.00089
Abstract
With the widespread use of artificial intelligence (AI) applications and the dramatic growth in data volumes from edge devices, many recent works place the training of AI models onto edge devices. The state-of-the-art edge training framework, federated learning (FL), requires transferring a large amount of data between edge devices and the central server, which causes heavy communication overhead. To alleviate this overhead, gradient compression techniques are widely used. However, the bandwidth of edge devices usually differs, causing communication heterogeneity. Existing gradient compression techniques usually adopt a fixed compression rate and do not take into account the straggler problem caused by this communication heterogeneity. To address these issues, we propose AGQFL, an automatic gradient quantization method consisting of three modules: a quantization indicator module, a quantization strategy module, and a quantization optimizer module. The quantization indicator module automatically determines the direction in which to adjust the quantization precision by measuring the convergence ability of the current model. Following the indicator and the physical bandwidth of each node, the quantization strategy module adjusts the quantization precision at run-time. Furthermore, the quantization optimizer module introduces a new optimizer to reduce training bias and eliminate instability during the training process. Experimental results show that AGQFL can greatly speed up the training process in edge AI systems while maintaining or even improving model accuracy.
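To make the idea of run-time-adjustable gradient quantization concrete, the sketch below shows a generic stochastic uniform quantizer with a selectable bit-width, plus a toy rule that picks the bit-width from a convergence signal and a node's bandwidth. This is a minimal illustration under stated assumptions: the function names, the QSGD-style quantizer, the convergence signal, and all thresholds are hypothetical, not the AGQFL algorithm described in the paper.

```python
import numpy as np

def quantize_gradient(grad: np.ndarray, num_bits: int) -> np.ndarray:
    """Stochastic uniform quantization of a gradient tensor to num_bits.

    A generic QSGD-style scheme, unbiased in expectation; the paper's
    actual quantizer may differ.
    """
    levels = 2 ** num_bits - 1
    scale = np.max(np.abs(grad))
    if scale == 0.0:
        return np.zeros_like(grad)
    # Map magnitudes into [0, levels] and round stochastically so that
    # the quantized gradient equals the true gradient in expectation.
    normalized = np.abs(grad) / scale * levels
    lower = np.floor(normalized)
    prob_up = normalized - lower
    quantized = lower + (np.random.rand(*grad.shape) < prob_up)
    return np.sign(grad) * quantized / levels * scale

def choose_precision(convergence_signal: float, bandwidth_bps: float,
                     min_bits: int = 2, max_bits: int = 8) -> int:
    """Toy run-time precision rule: use higher precision when the model
    is converging poorly, but cap it on slow links to avoid stragglers.
    The signal and every threshold here are illustrative assumptions."""
    bits = max_bits if convergence_signal > 0.5 else min_bits
    if bandwidth_bps < 10e6:  # hypothetical "slow link" threshold
        bits = min(bits, 4)
    return bits

# Hypothetical usage on one node's local gradient before upload:
grad = np.random.randn(1000).astype(np.float32)
bits = choose_precision(convergence_signal=0.7, bandwidth_bps=5e6)
compressed = quantize_gradient(grad, bits)
```

In an FL round, each node would then transmit the quantized gradient (along with the scale and bit-width) instead of the full-precision tensor, trading per-round precision for lower communication cost on its link.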