Computer science
Throughput
Deep neural network
Inference
Scheduling (production processes)
CUDA
Artificial neural network
Artificial intelligence
Parallel computing
Operating system
Wireless
Operations management
Economics
Authors
Vyacheslav Zhdanovskiy,Lev Teplyakov,Philipp Belyaev
Abstract
In recent years, there has been a significant growth of interest in real-world systems based on deep neural networks (DNNs). These systems typically incorporate multiple DNNs running simultaneously. In this paper we propose a novel approach to multi-DNN execution on a single GPU using multiple CUDA contexts and TensorRT, a state-of-the-art DNN inference framework. We show that it can lead to more efficient scheduling of multiple DNNs, especially when a lightweight and a heavy DNN are inferred together. Our approach can provide an almost 7x increase in the throughput of a lightweight DNN at the cost of a negligible throughput drop for a heavy DNN, compared to the baseline. Moreover, we compare two ways of improving the throughput of a single DNN by processing multiple images together: standard batching and implicit batching, i.e., processing multiple images simultaneously using several TensorRT execution contexts. We show that while standard batching outperforms implicit batching at larger batch sizes, implicit batching can provide up to 43% more throughput for a smaller DNN at smaller batch sizes.
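The scheduling benefit the abstract describes can be illustrated with a toy time-sharing model (plain Python, no GPU or TensorRT required). All latency numbers below are hypothetical, chosen only to show the mechanism: when one context serializes a heavy and a lightweight DNN, the lightweight model's throughput is capped by the heavy model's latency, whereas interleaved execution via separate contexts lets the lightweight model run many times per heavy inference. This is a sketch of the idea, not the paper's actual measurement setup.

```python
# Toy model of GPU time-sharing between a heavy and a lightweight DNN.
# All latencies are hypothetical illustrative numbers, not measurements
# from the paper.

HEAVY_MS = 70.0     # assumed per-inference latency of the heavy DNN
LIGHT_MS = 10.0     # assumed per-inference latency of the lightweight DNN
WINDOW_MS = 1000.0  # simulated wall-clock budget


def serial_throughput():
    """Single context: one heavy and one light inference alternate,
    so each completes once per (HEAVY_MS + LIGHT_MS) cycle."""
    cycles = WINDOW_MS // (HEAVY_MS + LIGHT_MS)
    return cycles, cycles  # (heavy inferences, light inferences) per window


def interleaved_throughput():
    """Two contexts with idealized perfect overlap: the heavy DNN runs
    back-to-back, and light inferences fill the remaining GPU time."""
    heavy = WINDOW_MS // HEAVY_MS
    light = WINDOW_MS // LIGHT_MS
    return heavy, light


if __name__ == "__main__":
    s_heavy, s_light = serial_throughput()
    i_heavy, i_light = interleaved_throughput()
    print(f"serial:      heavy={s_heavy:.0f}/s, light={s_light:.0f}/s")
    print(f"interleaved: heavy={i_heavy:.0f}/s, light={i_light:.0f}/s")
    print(f"lightweight speedup: {i_light / s_light:.1f}x")
```

With these assumed latencies the lightweight model's throughput rises from 12 to 100 inferences per second while the heavy model's does not drop, which mirrors (but does not reproduce) the paper's observation of a large lightweight-DNN gain at negligible heavy-DNN cost.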