Computer science
Deep learning
Benchmark (surveying)
Stochastic gradient descent (SGD)
Convolutional neural network
Artificial intelligence
Server
Deep neural network
Convolution (computer science)
Machine learning
Acceleration
Computer architecture
Node (physics)
Computer engineering
Artificial neural network
Parallel computing
Distributed computing
Operating system
Structural engineering
Engineering
Geography
Geodesy
Authors
Shaohuai Shi,Qiang Wang,Xiaowen Chu
Identifier
DOI:10.1109/dasc/picom/datacom/cyberscitec.2018.000-4
Abstract
Deep learning frameworks have been widely deployed on GPU servers for deep learning applications in both academia and industry. Training deep neural networks (DNNs) involves many standard processes and algorithms, such as convolution and stochastic gradient descent (SGD), yet the running performance of different frameworks can differ even when the same deep model is trained on the same GPU hardware. In this study, we evaluate the running performance of four state-of-the-art distributed deep learning frameworks (i.e., Caffe-MPI, CNTK, MXNet, and TensorFlow) in single-GPU, multi-GPU, and multi-node environments. We first build performance models of the standard processes in training DNNs with SGD, and then benchmark the running performance of these frameworks with three popular convolutional neural networks (i.e., AlexNet, GoogleNet, and ResNet-50). We then analyze which factors cause the performance gap among these four frameworks. Through both analytical and experimental analysis, we identify bottlenecks and overheads that could be further optimized. The main contribution is that the proposed performance models and the accompanying analysis provide directions for further optimization in both algorithmic design and system configuration.
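The abstract refers to performance models of the standard processes in SGD-based training but does not reproduce them. The following is a minimal, hypothetical sketch of what a per-iteration time model for synchronous SGD across GPUs could look like; the additive breakdown, variable names, and overlap term are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical per-iteration time model for synchronous SGD (S-SGD).
# The breakdown into forward, backward, gradient communication, and
# update phases, and the overlap factor, are assumptions for illustration.

def iteration_time(t_forward, t_backward, t_gradient_comm, t_update, overlap=0.0):
    """Estimate the wall-clock time of one S-SGD iteration.

    t_forward       -- forward-pass time on one GPU (seconds)
    t_backward      -- backward-pass time on one GPU (seconds)
    t_gradient_comm -- time to aggregate gradients across GPUs/nodes (seconds)
    t_update        -- time to apply the SGD weight update (seconds)
    overlap         -- fraction of communication hidden behind the backward
                       pass (0.0 = fully exposed, 1.0 = fully hidden)
    """
    visible_comm = t_gradient_comm * (1.0 - overlap)
    return t_forward + t_backward + visible_comm + t_update


# Example: 30 ms forward, 60 ms backward, 40 ms all-reduce, 5 ms update,
# with half of the communication overlapped with backward computation.
print(iteration_time(0.030, 0.060, 0.040, 0.005, overlap=0.5))  # ~0.115 s
```

Such a model makes it easy to see which phase dominates in single-GPU, multi-GPU, and multi-node settings, which is the kind of bottleneck analysis the abstract describes.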