
MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism

Keywords: Computer Science, Parallel Computing, Pipeline (software), Parallelism, Task Parallelism, Instruction-Level Parallelism, Computer Architecture, Operating Systems
Authors
Zheng Zhang,Yaqi Xia,H. Wang,Donglin Yang,Chuang Hu,Xiaobo Zhou,Dazhao Cheng
Source
Journal: IEEE Transactions on Parallel and Distributed Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: 35(6): 998-1011 | Citations: 4
Identifier
DOI: 10.1109/tpds.2024.3385639
Abstract

In recent years, the Mixture-of-Experts (MoE) technique has gained widespread popularity as a means to scale pretrained models to exceptionally large sizes. Dynamic activation of experts enables conditional computation, increasing the number of parameters in a neural network, which is critical for absorbing the vast amounts of knowledge available in many deep learning areas. However, despite existing system and algorithm optimizations, significant challenges remain in communication inefficiency and memory consumption. In this paper, we present the design and implementation of MPMoE, a high-performance library that accelerates MoE training with adaptive and memory-efficient pipeline parallelism. Inspired by the observation that the MoE training procedure can be divided into multiple independent sub-stages, we design a pipeline parallelism method that reduces communication latency by overlapping communication with computation operations. Further, we analyze the memory footprint breakdown of MoE training and identify that activations and temporary buffers are the primary contributors to the overall memory footprint. Toward memory efficiency, we propose memory reuse strategies that reduce memory requirements by eliminating memory redundancies. Finally, to jointly optimize pipeline granularity and memory reuse strategies, we propose a profile-based algorithm and a performance model that determine the configurations of MPMoE at runtime. We implement MPMoE upon PyTorch and evaluate it with common MoE models on two physical clusters of 64 NVIDIA A100 GPUs and 16 NVIDIA V100 GPUs, respectively. Compared with state-of-the-art approaches, MPMoE achieves up to a 2.3× speedup while reducing the memory footprint by more than 30% when training large models.
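
To make the pipelining idea in the abstract concrete, below is a minimal PyTorch sketch of overlapping an MoE layer's all-to-all dispatch with expert computation by splitting the token batch into micro-chunks. This is an illustration under assumptions, not the authors' MPMoE code: `expert_fn` and `num_chunks` are hypothetical names, tokens are chunked evenly along dim 0 (real MoE dispatch splits tokens by expert assignment and capacity), and the symmetric all-to-all "combine" stage after the experts is omitted for brevity.

```python
import torch
import torch.distributed as dist

def pipelined_moe_dispatch(tokens: torch.Tensor, expert_fn, num_chunks: int = 4):
    """Dispatch `tokens` to experts in `num_chunks` overlapped pipeline stages."""
    chunks = [c.contiguous() for c in tokens.chunk(num_chunks, dim=0)]
    recv = [torch.empty_like(c) for c in chunks]   # per-chunk receive buffers
    outputs = [None] * len(chunks)
    handles = []

    for i, chunk in enumerate(chunks):
        # Launch the all-to-all for chunk i asynchronously; NCCL executes it
        # on its own internal stream, so the GPU can compute concurrently.
        handles.append(dist.all_to_all_single(recv[i], chunk, async_op=True))
        if i > 0:
            # Once chunk i-1's communication has finished, run its experts
            # while chunk i's all-to-all is still in flight: the overlap.
            handles[i - 1].wait()
            outputs[i - 1] = expert_fn(recv[i - 1])

    # Drain the pipeline: the last chunk has no later chunk to hide behind.
    handles[-1].wait()
    outputs[-1] = expert_fn(recv[-1])
    return torch.cat(outputs, dim=0)
```

The sketch also hints at the memory-reuse angle the abstract raises: the per-chunk `recv` buffers are redundant, since chunk i-1's buffer becomes free as soon as its expert output is produced, so a recycled ping-pong pair of buffers would suffice. Eliminating such redundancies across sub-stages is the flavor of the memory reuse strategies the paper proposes.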