Computer Science
Inference
Deep Learning
General-Purpose Computing on Graphics Processing Units (GPGPU)
Artificial Intelligence
Computer Architecture
Parallel Computing
Machine Learning
Computer Graphics (Images)
Drawing
Authors
Jianfeng Gu, Yichao Zhu, P. Wang, Mohak Chadha, Michael Gerndt
Identifier
DOI:10.1145/3605573.3605638
Abstract
Serverless computing (FaaS) has been extensively utilized for deep learning (DL) inference due to its ease of deployment and pay-per-use benefits. However, existing FaaS platforms utilize GPUs in a coarse-grained manner for DL inference, without taking spatio-temporal resource multiplexing and isolation into account, which results in severe GPU under-utilization, high usage expenses, and SLO (Service Level Objective) violations. There is an imperative need for an efficient and SLO-aware GPU-sharing mechanism in serverless computing to facilitate cost-effective DL inference. In this paper, we propose FaST-GShare, an efficient FaaS-oriented spatio-temporal GPU-sharing architecture for deep learning inference. In the architecture, we introduce the FaST-Manager to limit and isolate spatio-temporal resources for GPU multiplexing. To characterize function performance, the automatic and flexible FaST-Profiler is proposed to profile function throughput under various resource allocations. Based on the profiling data and the isolation mechanism, we introduce the FaST-Scheduler with heuristic auto-scaling and efficient resource allocation to guarantee function SLOs. Meanwhile, the FaST-Scheduler schedules functions with efficient GPU node selection to maximize GPU usage. Furthermore, model sharing is exploited to mitigate memory contention. Our prototype implementation on the OpenFaaS platform and experiments on an MLPerf-based benchmark show that FaST-GShare can ensure resource isolation and function SLOs. Compared to the time-sharing mechanism, FaST-GShare improves throughput by 3.15x, GPU utilization by 1.34x, and SM (Streaming Multiprocessor) occupancy by 3.13x on average.
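The abstract describes profiling-driven, SLO-aware allocation of spatio-temporal GPU shares. The sketch below is only an illustration of that general idea, not the authors' implementation: the profiling data, the names (`Allocation`, `choose_allocation`, `place`), and the best-fit node-selection heuristic are all hypothetical assumptions used to show how profiled throughput could drive the choice of a minimal GPU share that still meets a function's demand.

```python
# Illustrative sketch (hypothetical, not FaST-GShare's actual API):
# pick the smallest spatio-temporal GPU share whose profiled throughput
# covers a function's request rate, then place it on a GPU node best-fit style.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Allocation:
    sm_fraction: float   # spatial share: fraction of streaming multiprocessors
    time_share: float    # temporal share: fraction of GPU time slices
    throughput: float    # profiled requests/sec under this allocation


@dataclass
class GPUNode:
    name: str
    free_sm: float = 1.0
    free_time: float = 1.0

    def fits(self, a: Allocation) -> bool:
        return self.free_sm >= a.sm_fraction and self.free_time >= a.time_share

    def reserve(self, a: Allocation) -> None:
        self.free_sm -= a.sm_fraction
        self.free_time -= a.time_share


def choose_allocation(profile: list, demand_rps: float) -> Optional[Allocation]:
    """Smallest profiled allocation whose throughput still covers the demand."""
    feasible = [a for a in profile if a.throughput >= demand_rps]
    return min(feasible, key=lambda a: a.sm_fraction * a.time_share, default=None)


def place(nodes: list, alloc: Allocation) -> Optional[GPUNode]:
    """Best-fit node selection: prefer the node left with the least slack."""
    candidates = [n for n in nodes if n.fits(alloc)]
    if not candidates:
        return None
    node = min(
        candidates,
        key=lambda n: (n.free_sm - alloc.sm_fraction) + (n.free_time - alloc.time_share),
    )
    node.reserve(alloc)
    return node


if __name__ == "__main__":
    # Hypothetical profiling data for one inference function.
    profile = [
        Allocation(0.125, 0.25, 40.0),
        Allocation(0.25, 0.25, 75.0),
        Allocation(0.25, 0.50, 140.0),
    ]
    nodes = [GPUNode("gpu-node-a"), GPUNode("gpu-node-b", free_sm=0.3, free_time=0.6)]
    alloc = choose_allocation(profile, demand_rps=60.0)
    if alloc is not None:
        print("chosen:", alloc, "placed on:", place(nodes, alloc))
```

Under these assumptions, packing each function into the tightest node that can still host it leaves larger contiguous shares free for future functions, which is one plausible way to read the paper's goal of maximizing GPU usage during node selection.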