Serverless computing (FaaS) has been extensively utilized for deep learning (DL) inference due to its ease of deployment and pay-per-use benefits. However, existing FaaS platforms utilize GPUs for DL inference in a coarse-grained manner, without taking spatio-temporal resource multiplexing and isolation into account, which results in severe GPU under-utilization, high usage costs, and SLO (Service Level Objective) violations. There is an imperative need for an efficient and SLO-aware GPU-sharing mechanism in serverless computing to facilitate cost-effective DL inference. In this paper, we propose FaST-GShare, an efficient FaaS-oriented Spatio-Temporal GPU Sharing architecture for deep learning inference. Within the architecture, we introduce the FaST-Manager to limit and isolate spatio-temporal resources for GPU multiplexing. To characterize function performance, the automatic and flexible FaST-Profiler is proposed to profile function throughput under various resource allocations. Based on the profiling data and the isolation mechanism, we introduce the FaST-Scheduler with heuristic auto-scaling and efficient resource allocation to guarantee function SLOs. Meanwhile, the FaST-Scheduler schedules functions with efficient GPU node selection to maximize GPU usage. Furthermore, model sharing is exploited to mitigate memory contention. Our prototype implementation on the OpenFaaS platform and experiments on an MLPerf-based benchmark demonstrate that FaST-GShare ensures resource isolation and function SLOs. Compared to the time-sharing mechanism, FaST-GShare improves throughput by 3.15x, GPU utilization by 1.34x, and SM (Streaming Multiprocessor) occupancy by 3.13x on average.