Keywords
Computer science, Convolutional neural network, Computer engineering, Edge device, Artificial intelligence, Edge computing, Mobile device, Inference, Software, Deep learning, Enhanced Data Rates for GSM Evolution (EDGE), Computing, Distributed computing, Machine learning, Latency, Robotics, Mobile edge computing, Artificial neural network, Energy efficiency, Cloud computing, Robot, Algorithm, Telecommunications, Electrical engineering, Programming language, Engineering, Operating system
Authors
Nitthilan Kannappan Jayakodi, Janardhan Rao Doppa, Partha Pratim Pande
Identifier
DOI: 10.1109/iccad51958.2021.9643557
Abstract
A huge number of edge applications, including self-driving cars, mobile health, robotics, and augmented reality / virtual reality, are enabled by deep neural networks (DNNs). Currently, much of the computation for these applications happens in the cloud, but there are several good reasons to perform the processing on local edge platforms such as smartphones: improved accessibility in different parts of the world, low latency, and data privacy. In this paper, we present a general hardware and software co-design framework for energy-efficient edge AI that covers both simple classification and structured output prediction tasks (e.g., predicting 3D shapes from images). The framework relies on two key ideas. First, we design a space of DNNs of increasing complexity (coarse to fine) and perform input-specific adaptive inference by selecting a DNN of appropriate complexity depending on the hardness of each input example. Second, we execute the selected DNN on the target edge platform using a resource management policy to save energy. We also provide instantiations of the co-design framework for three qualitatively different problem settings: convolutional neural networks for image classification, graph convolutional networks for predicting 3D shapes from images, and generative adversarial networks for photo-realistic unconditional image generation. Our experiments on real-world benchmarks and mobile platforms show that the co-design framework achieves significant energy savings with little to no loss in prediction accuracy.
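The first key idea, input-specific adaptive inference over a coarse-to-fine family of DNNs, is often realized as a cascade in which a cheap model handles easy inputs and defers hard ones to a larger model. The sketch below is a minimal PyTorch illustration of that general pattern, not the paper's actual co-design framework; the placeholder models, the softmax-confidence hardness measure, and the threshold value are assumptions made purely for illustration.

```python
# Minimal sketch of input-adaptive (coarse-to-fine) inference, assuming a
# two-model cascade and top-class softmax confidence as the "hardness" signal.
# Illustrative only; not the authors' implementation.
import torch
import torch.nn as nn

NUM_CLASSES = 10

# Coarse (cheap) and fine (expensive) classifiers; placeholders for real DNNs.
coarse_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, NUM_CLASSES))
fine_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 512), nn.ReLU(),
    nn.Linear(512, NUM_CLASSES),
)

CONFIDENCE_THRESHOLD = 0.9  # assumed; in practice tuned per energy/accuracy target


@torch.no_grad()
def adaptive_predict(x: torch.Tensor) -> torch.Tensor:
    """Run the coarse model first; escalate to the fine model only for
    inputs whose top-class softmax confidence falls below the threshold."""
    coarse_logits = coarse_model(x)
    confidence, prediction = torch.softmax(coarse_logits, dim=1).max(dim=1)

    hard = confidence < CONFIDENCE_THRESHOLD
    if hard.any():
        # Only the "hard" examples pay for the expensive model.
        fine_logits = fine_model(x[hard])
        prediction[hard] = fine_logits.argmax(dim=1)
    return prediction


if __name__ == "__main__":
    batch = torch.randn(8, 3, 32, 32)  # dummy CIFAR-like inputs
    print(adaptive_predict(batch))
```

In such a cascade, the confidence threshold is the knob that trades energy for accuracy: raising it sends more inputs to the expensive model, while lowering it keeps more inference on the cheap model.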