ASHL: An Adaptive Multi-Stage Distributed Deep Learning Training Scheme for Heterogeneous Environments

Authors
Zhaoyan Shen,Qingxiang Tang,Tianren Zhou,Yuhao Zhang,Zhiping Jia,Dongxiao Yu,Zhiyong Zhang,Bingzhe Li
Source
Journal: IEEE Transactions on Computers [Institute of Electrical and Electronics Engineers]
Volume/Issue: 73(1): 30-43. Cited by: 1
Identifier
DOI: 10.1109/TC.2023.3315847
Abstract

With the growth of dataset and model sizes, distributed deep learning has been proposed to accelerate training and improve the accuracy of DNN models. The parameter server framework is a popular collaborative architecture for data-parallel training; it works well in homogeneous environments by properly aggregating the computation and communication capabilities of different workers. In heterogeneous environments, however, worker resources vary widely, and stragglers can severely limit overall training speed. In this paper, we propose ASHL, an adaptive multi-stage distributed deep learning training framework for heterogeneous environments. First, a profiling scheme captures the capability of each worker so that training and communication tasks can be planned appropriately on each worker, laying the foundation for formal training. Second, a hybrid-mode training scheme (i.e., coarse-grained and fine-grained training) balances model accuracy and training speed. The coarse-grained stage (named AHL) adopts an asynchronous communication strategy with less frequent communication; its main goal is to make the model converge quickly to a certain level. The fine-grained stage (named SHL) uses a semi-asynchronous communication strategy with a high communication frequency; its main goal is to improve the final convergence quality. Finally, a compression-based communication scheme further increases the communication efficiency of the training process. Our experimental results show that ASHL reduces overall training time by more than 35% to reach the same convergence level and achieves better generalization than state-of-the-art schemes such as ADSP.
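To make the asynchronous vs. semi-asynchronous distinction concrete, the sketch below simulates workers of different speeds committing updates to a parameter server under a bounded-staleness rule: a worker may run ahead of the slowest worker by at most `staleness_bound` local steps (a bound of 1 degenerates to synchronous lockstep; a large bound approaches fully asynchronous training). This is a minimal illustration of the general semi-asynchronous idea only; the function name, the speed model, and the scheduling details are assumptions for this example and do not reproduce the paper's actual AHL/SHL algorithms.

```python
def run_semi_async(worker_speeds, steps_per_worker, staleness_bound):
    """Simulate a bounded-staleness (semi-asynchronous) parameter server.

    worker_speeds: steps per unit time for each worker (heterogeneous).
    steps_per_worker: local steps each worker must commit.
    staleness_bound: max lead (in steps) over the slowest worker.
    Returns the simulated finish time and the list of committed updates.
    """
    n = len(worker_speeds)
    progress = [0] * n                       # committed steps per worker
    clock = 0.0                              # simulated wall-clock time
    events = []                              # (time, worker) per commit
    next_ready = [1.0 / s for s in worker_speeds]
    while min(progress) < steps_per_worker:
        # A worker may commit only if it is not too far ahead of the
        # slowest worker; the slowest worker is always eligible.
        allowed = [i for i in range(n)
                   if progress[i] < steps_per_worker
                   and progress[i] - min(progress) < staleness_bound]
        i = min(allowed, key=lambda j: next_ready[j])
        clock = max(clock, next_ready[i])    # fast workers wait if blocked
        progress[i] += 1
        events.append((clock, i))
        next_ready[i] = clock + 1.0 / worker_speeds[i]
    return clock, events
```

For example, with one slow and one fast worker (`run_semi_async([1.0, 4.0], 4, 1)`), the bound of 1 forces lockstep, so the fast worker idles while waiting for the straggler; raising the bound lets the fast worker commit its steps earlier, which is the trade-off the coarse-grained asynchronous stage exploits.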