In this paper, nearly 40 commonly used deep neural network (DNN) models are selected, and their performance across computing platforms and inference frameworks is analysed in depth. Performance is measured in terms of accuracy, total number of model parameters, computational complexity, accuracy density, inference time, memory consumption and other related metrics. The heterogeneous computing experiments are carried out on both the Google Colab cloud computing platform and the NVIDIA Jetson Nano embedded edge computing platform. The obtained performance is compared with that of two previously studied computing platforms: a workstation equipped with an NVIDIA Titan X Pascal and an embedded system based on an NVIDIA Jetson TX1 board. In addition, different inference frameworks are investigated on the Jetson Nano platform to evaluate the inference efficiency of the DNN models. Regression models are established to characterize how computing performance varies across DNN classification algorithms so that the inference behaviour of unseen models can be estimated, and ANOVA methods are proposed to quantify the differences between models. The experimental results provide practical guidance for the selection, deployment and application of DNN models. Code is available at https://github.com/Foreverzfy/Model-Test.
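
As a rough illustration of the statistical workflow described above, the following minimal Python sketch fits a regression relating a model-complexity metric to measured inference time and runs a one-way ANOVA comparing inference times across frameworks. It is not the authors' code, and all measurements, model names and framework labels in it are hypothetical placeholders.

# Minimal sketch (not the paper's implementation) of the abstract's two
# statistical tools: (1) a regression of inference latency on a complexity
# metric, usable to estimate latency for an unseen model, and (2) a one-way
# ANOVA comparing latencies across inference frameworks.
# All numbers below are hypothetical placeholders, not measured results.

import numpy as np
from scipy import stats

# Hypothetical measurements: computational complexity (GFLOPs) and mean
# inference time (ms) for a handful of DNN models on one platform.
gflops = np.array([0.6, 1.8, 4.1, 7.8, 15.5])
latency_ms = np.array([5.2, 11.9, 24.3, 41.0, 80.7])

# Simple linear regression: latency as a function of complexity.
slope, intercept, r, p, stderr = stats.linregress(gflops, latency_ms)
print(f"latency ~ {slope:.2f} * GFLOPs + {intercept:.2f}  (R^2 = {r**2:.3f})")

# Estimated inference time for a hypothetical 10-GFLOP model.
print("predicted latency for 10 GFLOPs:", slope * 10 + intercept, "ms")

# One-way ANOVA: do mean inference times differ across frameworks?
# Each array holds repeated latency measurements (ms) of the same model
# under a different (hypothetical) inference framework on the Jetson Nano.
framework_a = np.array([24.1, 24.6, 23.9, 24.3])
framework_b = np.array([21.8, 22.0, 21.5, 22.3])
framework_c = np.array([25.0, 25.4, 24.8, 25.1])
f_stat, p_value = stats.f_oneway(framework_a, framework_b, framework_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

A small p-value in the ANOVA step would indicate that the choice of inference framework has a statistically significant effect on latency, which is the kind of comparison the paper quantifies across its full set of models and platforms.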