Computer science
Cloud computing
Inference
Edge computing
Pascal (unit)
Workstation
Artificial neural network
Enhanced Data Rates for GSM Evolution (EDGE)
Computer engineering
Software deployment
Data mining
Artificial intelligence
Operating system
Programming language
Authors
Feiyu Zhao, Sheng Wang, Ping Lin, Yongming Chen
Identifier
DOI: 10.1016/j.eswa.2023.120475
Abstract
In this paper, nearly 40 commonly used deep neural network (DNN) models are selected and analysed in depth across platforms and inference frameworks. Accuracy, total number of model parameters, computational complexity, accuracy density, inference time, memory consumption, and other related metrics are used to measure their performance. The heterogeneous computing experiments are implemented on both the Google Colab cloud computing platform and the Jetson Nano embedded edge computing platform, and the obtained performance is compared with that of two previously studied computing platforms: a workstation equipped with an NVIDIA Titan X Pascal and an embedded system based on an NVIDIA Jetson TX1 board. In addition, on the Jetson Nano platform, different inference frameworks are investigated to evaluate the inference efficiency of the DNN models. Regression models are established to characterize how the computing performance of different DNN classification algorithms varies, so that the inference behaviour of unknown models can be estimated, and ANOVA methods are proposed to quantify the differences between models. The experimental results provide practical guidance for selecting, deploying and applying DNN models. Code is available at https://github.com/Foreverzfy/Model-Test.
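The abstract describes measuring inference time and memory for many DNN classifiers, fitting regression models over model properties, and applying ANOVA to compare models. As a rough illustration of that kind of workflow (not the authors' released code, which is in the linked repository), the sketch below times a few torchvision classifiers, fits a simple latency-versus-parameter-count regression, and runs a one-way ANOVA on the per-run latencies. The chosen models, batch size, warm-up count, and repeat count are assumptions for illustration only.

```python
# Minimal benchmarking sketch, assuming PyTorch, torchvision, NumPy and SciPy.
import time

import numpy as np
import torch
import torchvision.models as models
from scipy.stats import f_oneway

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
WARMUP, REPEATS = 5, 30                                   # assumed run counts
DUMMY_INPUT = torch.randn(1, 3, 224, 224, device=DEVICE)  # single 224x224 image


def measure_latency_ms(model):
    """Return per-run inference latencies (ms) for one dummy image."""
    model = model.to(DEVICE).eval()
    latencies = []
    with torch.no_grad():
        for _ in range(WARMUP):                 # warm up caches / cuDNN autotune
            model(DUMMY_INPUT)
        for _ in range(REPEATS):
            if DEVICE == "cuda":
                torch.cuda.synchronize()        # flush queued GPU work first
            start = time.perf_counter()
            model(DUMMY_INPUT)
            if DEVICE == "cuda":
                torch.cuda.synchronize()
            latencies.append((time.perf_counter() - start) * 1e3)
    return latencies


# Three illustrative classifiers standing in for the ~40 models in the paper.
candidates = {
    "resnet18": models.resnet18(),
    "mobilenet_v2": models.mobilenet_v2(),
    "vgg16": models.vgg16(),
}

results = {}
for name, net in candidates.items():
    params_m = sum(p.numel() for p in net.parameters()) / 1e6  # parameters (M)
    lat = measure_latency_ms(net)
    results[name] = (params_m, lat)
    print(f"{name}: {params_m:.1f} M params, median latency {np.median(lat):.2f} ms")

# Simple linear regression of median latency on parameter count, in the spirit
# of the paper's regression models (its exact functional form may differ).
x = np.array([v[0] for v in results.values()])
y = np.array([np.median(v[1]) for v in results.values()])
slope, intercept = np.polyfit(x, y, 1)
print(f"latency ~ {slope:.3f} * params(M) + {intercept:.2f} ms")

# One-way ANOVA over per-run latencies to test whether the models differ.
f_stat, p_value = f_oneway(*(v[1] for v in results.values()))
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")
```

Running the same script unchanged on a Colab GPU instance and on a Jetson-class board would give one small-scale analogue of the cloud-versus-edge comparison the paper reports.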