Authors
Zhengxian Lu,Fangyu Wang,Zhiwei Xu,Fei Yang,Tao Li
Abstract
Background: Transformer models have emerged as potent solutions to a wide array of multidisciplinary challenges. The deployment of transformer architectures is significantly hindered by their extensive computational and memory requirements, necessitating reliance on advanced efficient distributed training methodologies. Motivation: Prior research has delved into the performance bottlenecks associated with distributed training, aiming to unravel these bottlenecks and suggest optimization directions. However, such analyses often overlook three aspects unique to transformer models: the specialized architecture, the dependency on various distributed strategies, and the requirement to balance computational and memory overhead. Method: This paper aims to bridge this gap by offering a comprehensive examination of the performance bottlenecks inherent in the distributed training of transformer models, leveraging both theoretical analysis and empirical investigation. We propose an analytical framework tailored to these unique aspects of transformers, facilitating a holistic evaluation of model architectures, distributed strategies, and resource consumption. Based on this analytical framework, we conduct a comparative analysis of theoretical performances and further systematically explore how various distributed training strategies fare in real-world scenarios. Results: Most of the experimental results can be well explained by the analytical outcomes derived from the analytical framework. Notably, our findings suggest an advantage of pipeline parallelism over data parallelism for transformer models. Moreover, we shed light on some unexpected outcomes, such as the potential for increased total memory overhead due to suboptimal model partitioning within pipeline parallelism. Additionally, we underscore the significance of communication block size and waiting time to further enhance performance.
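The abstract's comparison of pipeline and data parallelism can be illustrated with a back-of-envelope estimate of per-device communication volume per training step. The formulas below are standard textbook estimates (ring all-reduce for data parallelism, point-to-point activation exchange for pipeline parallelism), not the paper's actual analytical framework, and all model sizes are hypothetical:

```python
# Rough per-device communication volume per training step for
# data parallelism (DP) vs pipeline parallelism (PP).
# Standard estimates; not the paper's framework. Sizes are illustrative.

def dp_comm_bytes(num_params: int, devices: int, bytes_per_elem: int = 4) -> float:
    """Ring all-reduce over gradients: each device transfers roughly
    2*(d-1)/d of the full gradient bytes per step."""
    return 2 * (devices - 1) / devices * num_params * bytes_per_elem

def pp_comm_bytes(activation_elems: int, micro_batches: int,
                  bytes_per_elem: int = 4) -> int:
    """Point-to-point between adjacent pipeline stages: one activation
    tensor forward and one gradient tensor backward per micro-batch."""
    return 2 * micro_batches * activation_elems * bytes_per_elem

# Hypothetical 1B-parameter model, fp32 gradients, 8 data-parallel devices.
dp = dp_comm_bytes(1_000_000_000, 8)          # 7.0e9 bytes
# Hypothetical stage-boundary activation of 4096 x 4096, 16 micro-batches.
pp = pp_comm_bytes(4096 * 4096, 16)           # ~2.1e9 bytes

# Boundary activations are typically much smaller than full gradients,
# which is one intuition behind PP's communication advantage.
print(dp > pp)
```

Under these particular assumptions the pipeline-parallel transfer is a fraction of the data-parallel all-reduce, consistent in spirit with the abstract's finding; the real comparison in the paper additionally accounts for memory overhead, pipeline bubbles, and waiting time.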