PS+: A Simple yet Effective Framework for Fast Training on Parameter Server

Keywords: Computer Science, Simplicity (Philosophy), Computation, Artificial Intelligence, Hyperparameters, Profiling (Computer Programming), Theoretical Computer Science, Algorithms, Programming Languages, Epistemology, Philosophy
Authors
A-Long Jin, Wenchao Xu, Song Guo, Bing Hu, Kwan L. Yeung
Source
Journal: IEEE Transactions on Parallel and Distributed Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: 33 (12): 4625-4637
Identifier
DOI: 10.1109/tpds.2022.3200518
Abstract

In distributed training, workers collaboratively refine the global model parameters by pushing their updates to the Parameter Server and pulling fresher parameters for the next iteration. This introduces high communication costs for training at scale, and incurs unproductive waiting time for workers. To minimize the waiting time, existing approaches overlap communication and computation for deep neural networks. Yet, these techniques not only require layer-by-layer model structures, but also need significant effort in runtime profiling and hyperparameter tuning. To make the overlapping optimization simple and generic, in this article we propose a new Parameter Server framework. Our solution decouples the dependency between push and pull operations, and allows workers to eagerly pull the global parameters. This way, both push and pull operations can be easily overlapped with computations. Besides, the overlapping manner offers a different way to address the straggler problem, where stale updates greatly retard the training process. In the new framework, with adequate information available to workers, they can explicitly modulate the learning rates for their updates. Thus, the global parameters are less compromised by stale updates. We implement a prototype system in PyTorch and demonstrate its effectiveness on both CPU and GPU clusters. Experimental results show that our prototype reduces per-iteration time by up to 54% and requires up to 37% fewer iterations for model convergence, achieving up to 2.86× speedup over widely-used synchronization schemes.
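The core mechanism the abstract describes — versioned parameters that workers pull eagerly, with stale pushes damped via a modulated learning rate — can be illustrated with a minimal single-process sketch. Everything here is an assumption for illustration: the class names, the version-counter bookkeeping, and the `1/(1+staleness)` damping rule are not the paper's actual algorithm, only one plausible instance of staleness-aware learning-rate modulation.

```python
class ParameterServer:
    """Toy server: applies versioned gradient pushes, damping stale ones."""

    def __init__(self, dim, base_lr=0.1):
        self.params = [0.0] * dim   # global model parameters
        self.version = 0            # incremented on every applied update
        self.base_lr = base_lr

    def pull(self):
        """Workers eagerly pull the current parameters and their version."""
        return list(self.params), self.version

    def push(self, grad, worker_version):
        """Apply a worker's gradient, scaling the step down by staleness."""
        staleness = self.version - worker_version
        lr = self.base_lr / (1 + staleness)  # hypothetical damping rule
        self.params = [p - lr * g for p, g in zip(self.params, grad)]
        self.version += 1
        return lr


ps = ParameterServer(dim=2)
_, v0 = ps.pull()
lr_fresh = ps.push([1.0, 1.0], v0)   # staleness 0: full base_lr
_, v1 = ps.pull()
ps.push([1.0, 1.0], v1)              # another fresh update
lr_stale = ps.push([1.0, 1.0], v0)   # computed against version 0: damped
print(lr_fresh, lr_stale)
```

Because each pull returns both parameters and a version number, a worker (or the server) has enough information to judge how stale an update is at push time, which is the "adequate information" the abstract alludes to.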