MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism

Keywords
Computer Science; Parallel Computing; Pipeline (Software); Parallelism (Grammar); Task Parallelism; Instruction-Level Parallelism; Computer Architecture; Operating System
Authors
Zheng Zhang,Yaqi Xia,H. Wang,Donglin Yang,Chuang Hu,Xiaobo Zhou,Dazhao Cheng
Source
Journal: IEEE Transactions on Parallel and Distributed Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: 35(6): 998-1011 · Citations: 4
Identifier
DOI: 10.1109/tpds.2024.3385639
Abstract

In recent years, the Mixture-of-Experts (MoE) technique has gained widespread popularity as a means to scale pre-trained models to exceptionally large sizes. Dynamic activation of experts allows for conditional computation, increasing the number of parameters of neural networks, which is critical for absorbing the vast amounts of knowledge available in many deep learning areas. However, despite existing system and algorithm optimizations, significant challenges remain in the inefficiencies of communication and memory consumption. In this paper, we present the design and implementation of MPMoE, a high-performance library that accelerates MoE training with adaptive and memory-efficient pipeline parallelism. Inspired by the observation that the MoE training procedure can be divided into multiple independent sub-stages, we design a pipeline parallelism method that reduces communication latency by overlapping communication with computation operations. Further, we analyze the memory footprint breakdown of MoE training and identify that activations and temporary buffers are the primary contributors to the overall memory footprint. Toward memory efficiency, we propose memory reuse strategies that reduce memory requirements by eliminating memory redundancies. Finally, to jointly optimize pipeline granularity and memory reuse strategies, we propose a profile-based algorithm and a performance model that determine the configurations of MPMoE at runtime. We implement MPMoE upon PyTorch and evaluate it with common MoE models on two physical clusters, one with 64 NVIDIA A100 GPUs and one with 16 NVIDIA V100 GPUs. Compared with the state-of-the-art approach, MPMoE achieves up to 2.3× speedup while reducing the memory footprint by more than 30% for training large models.
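The pipelining idea the abstract describes can be illustrated briefly. The following is a minimal sketch, not the authors' implementation: it splits the MoE dispatch into micro-batches so that the asynchronous all-to-all transfer of one chunk overlaps with the expert computation on the previous chunk. The function name `pipelined_moe_dispatch`, the `expert_fn` callable, and the `num_chunks` granularity parameter are illustrative assumptions; only `torch.distributed.all_to_all_single` is the real PyTorch API.

```python
# A minimal sketch of communication/computation overlap for MoE dispatch.
# Assumes torch.distributed is already initialized (e.g., NCCL backend) and
# that each chunk's size is divisible by the world size, so the equal-split
# form of all_to_all_single applies. `expert_fn` and `num_chunks` are
# illustrative names, not part of the MPMoE API.
import torch
import torch.distributed as dist

def pipelined_moe_dispatch(tokens: torch.Tensor, expert_fn, num_chunks: int = 4) -> torch.Tensor:
    """Split the dispatch all-to-all into `num_chunks` micro-batches so the
    transfer of chunk i+1 overlaps with the expert computation on chunk i."""
    chunks = list(tokens.chunk(num_chunks, dim=0))
    recv = [torch.empty_like(c) for c in chunks]
    out = [None] * num_chunks

    # Kick off the first asynchronous all-to-all.
    handles = [dist.all_to_all_single(recv[0], chunks[0], async_op=True)]
    for i in range(num_chunks):
        if i + 1 < num_chunks:
            # Start moving the next chunk before computing on the current one.
            handles.append(dist.all_to_all_single(recv[i + 1], chunks[i + 1], async_op=True))
        handles[i].wait()            # chunk i has arrived on this rank
        out[i] = expert_fn(recv[i])  # compute overlaps chunk i+1's transfer
    return torch.cat(out, dim=0)
```

In the paper's formulation, the pipeline granularity (here `num_chunks`) is not fixed by hand: it is chosen jointly with the memory reuse strategy by the profile-based algorithm and performance model at runtime.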