Authors
Li Lyna Zhang,Shihao Han,Jianyu Wei,Ningxin Zheng,Ting Cao,Yuqing Yang,Yunxin Liu
Identifier
DOI:10.1145/3458864.3467882
Abstract
With the recent trend of on-device deep learning, inference latency has become a crucial metric in running Deep Neural Network (DNN) models on various mobile and edge devices. To this end, latency prediction of DNN model inference is highly desirable for many tasks where measuring the latency on real devices is infeasible or too costly, such as searching for efficient DNN models with latency constraints from a huge model-design space. Yet it is very challenging and existing approaches fail to achieve a high accuracy of prediction, due to the varying model-inference latency caused by the runtime optimizations on diverse edge devices.
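The abstract notes that measuring inference latency on real devices is often infeasible or costly, which is what motivates prediction. For context, a direct on-device measurement typically requires warmup runs and repeated timing to get a stable number. The sketch below is a minimal, hypothetical illustration of that measurement loop (it is not the paper's prediction method, and `tiny_inference` is a stand-in workload, not a real DNN):

```python
import time
import statistics

def measure_latency(fn, warmup=10, runs=50):
    """Return the median wall-clock latency of fn() in milliseconds.

    Warmup iterations are discarded so one-time costs (cache
    population, lazy initialization) do not skew the result;
    the median is used because it is robust to scheduling spikes.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# Stand-in "model": one small dense layer in pure Python.
def tiny_inference():
    x = [0.5] * 64
    w = [[0.01] * 64 for _ in range(64)]
    return [sum(xi * wij for xi, wij in zip(x, row)) for row in w]

latency_ms = measure_latency(tiny_inference)
print(f"median latency: {latency_ms:.3f} ms")
```

Doing this per candidate model across a large design space and many devices is exactly the cost that latency prediction is meant to avoid.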