Computer science
Scalability
Bottleneck
Process (computing)
Data compression
Distributed computing
Pipeline (software)
Backpropagation
Deep learning
Real-time computing
Communication system
Artificial intelligence
Artificial neural network
Embedded system
Computer network
Database
Operating system
Programming language
Authors
Juncai Liu, Jessie Hui Wang, Chenghao Rong, Jilong Wang
Identifier
DOI:10.1109/icc45855.2022.9839126
Abstract
Distributed learning is widely used to accelerate the training of deep learning models, but it is well known that communication efficiency limits the scalability of distributed learning systems. Gradient compression techniques are a promising way to reduce communication time: once gradients are compressed, the amount of data that must be communicated becomes much smaller. However, compressing gradients is itself time-consuming, and this extra compression time becomes a new bottleneck. In this paper, we design and implement PipeCompress, a system that decouples compression and backpropagation into two processes and pipelines them to hide the compression time. We also propose a specialized inter-process communication mechanism, based on the characteristics of distributed DNN training, to pass messages between the two processes efficiently, so that the decoupling introduces little extra inter-process communication overhead. To the best of our knowledge, this is the first work that identifies the overhead of compression and pipelines backpropagation with compression to hide the compression time in distributed learning. Experiments show that PipeCompress can significantly hide compression time, reduce iteration time, and accelerate training on various DNN models.
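The core idea in the abstract, starting to compress each layer's gradient as soon as backpropagation produces it so that compression overlaps with the rest of the backward pass, can be sketched in a few lines of PyTorch. The snippet below is only an assumption-laden simplification, not the authors' system: it uses a background thread, a plain `queue.Queue`, and a toy top-k compressor (`topk_compress`, `compression_worker`, and `make_hook` are hypothetical helpers invented for this sketch), whereas PipeCompress decouples the two stages into separate processes connected by a specialized inter-process communication mechanism.

```python
# Minimal sketch (not the authors' implementation): overlap gradient
# compression with backpropagation by handing each parameter's gradient to a
# background worker as soon as backward() produces it. PipeCompress itself
# moves compression into a separate *process* with a specialized IPC
# mechanism; a thread and a queue are used here only to keep the sketch
# self-contained.
import queue
import threading

import torch
import torch.nn as nn


def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Toy top-k sparsification: keep only the largest-magnitude entries."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return indices, flat[indices]


def compression_worker(grad_queue: queue.Queue, results: dict) -> None:
    """Compress gradients in the background while backpropagation continues."""
    while True:
        item = grad_queue.get()
        if item is None:          # sentinel: the backward pass has finished
            break
        name, grad = item
        results[name] = topk_compress(grad)


model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))
grad_queue: queue.Queue = queue.Queue()
compressed: dict = {}
worker = threading.Thread(target=compression_worker, args=(grad_queue, compressed))
worker.start()


def make_hook(name: str):
    def hook(grad: torch.Tensor) -> torch.Tensor:
        # Enqueue the gradient immediately, so its compression overlaps with
        # backpropagation through the layers that have not been reached yet.
        grad_queue.put((name, grad.detach().clone()))
        return grad
    return hook


for name, param in model.named_parameters():
    param.register_hook(make_hook(name))

x = torch.randn(32, 1024)
loss = model(x).sum()
loss.backward()                   # compression runs concurrently with this call

grad_queue.put(None)              # tell the worker that backpropagation is done
worker.join()
print({name: values.numel() for name, (_, values) in compressed.items()})
```

The sketch only shows where the overlap comes from; in a real system the process boundary, the choice of compressor, and the cost of moving gradients between processes all matter, which is why the paper pairs the pipelining with a specialized IPC mechanism.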