Computer science
Software
Set (abstract data type)
Hyperparameter
Design space exploration
Artificial neural network
Computer engineering
Artificial intelligence
Architecture
Reinforcement learning
Space exploration
Machine learning
Computer architecture
Computer hardware
Embedded system
Programming language
Engineering
Visual arts
Aerospace engineering
Art
Authors
Weiwen Jiang, Lei Yang, Edwin H.-M. Sha, Qingfeng Zhuge, Shouzhen Gu, Sakyasingha Dasgupta, Yiyu Shi, Jingtong Hu
Identifier
DOI:10.1109/tcad.2020.2986127
Abstract
We propose a novel hardware and software co-exploration framework for efficient neural architecture search (NAS). Different from existing hardware-aware NAS, which assumes a fixed hardware design and explores the NAS space only, our framework simultaneously explores both the architecture search space and the hardware design space to identify the best neural architecture and hardware pairs that maximize both test accuracy and hardware efficiency. Such a practice greatly opens up the design freedom and pushes forward the Pareto frontier between hardware efficiency and test accuracy for better design tradeoffs. The framework iteratively performs a two-level (fast and slow) exploration. Without lengthy training, the fast exploration can effectively fine-tune hyperparameters and prune inferior architectures in terms of hardware specifications, which significantly accelerates the NAS process. Then, the slow exploration trains candidates on a validation set and updates a controller using reinforcement learning to maximize the expected accuracy together with the hardware efficiency. In this article, we demonstrate that the co-exploration framework can effectively expand the search space to incorporate models with high accuracy, and we theoretically show that the proposed two-level optimization can efficiently prune inferior solutions to better explore the search space. The experimental results on ImageNet show that the co-exploration NAS can find solutions with the same accuracy, 35.24% higher throughput, and 54.05% higher energy efficiency compared with the hardware-aware NAS.
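The two-level co-exploration loop described in the abstract can be sketched in a few lines. This is a hypothetical, heavily simplified illustration, not the paper's implementation: `estimate_hardware` and `train_and_validate` are invented stand-ins (in the paper, hardware metrics come from exploring FPGA design parameters and accuracy from training on a validation set), and the greedy best-so-far update stands in for the reinforcement-learning controller update.

```python
def estimate_hardware(arch, hw):
    """Cheap hardware-efficiency proxy used by the fast exploration level.

    Hypothetical metric: efficiency is best when the layer width matches
    the hardware's processing-element count.
    """
    return 1.0 / (1 + abs(arch["filters"] - hw["pe_count"]) / hw["pe_count"])

def train_and_validate(arch):
    """Expensive accuracy estimate used by the slow exploration level.

    Stubbed out: wider models score higher, capped at 0.99.
    """
    return min(0.99, 0.5 + arch["filters"] / 256)

def co_explore(candidates, hw_designs, efficiency_budget=0.5):
    """One iteration of the fast/slow co-exploration over arch-hardware pairs."""
    # Fast level: pair each architecture with its best hardware design and
    # prune pairs that miss the efficiency budget -- no training involved.
    survivors = []
    for arch in candidates:
        hw = max(hw_designs, key=lambda h: estimate_hardware(arch, h))
        eff = estimate_hardware(arch, hw)
        if eff >= efficiency_budget:
            survivors.append((arch, hw, eff))

    # Slow level: "train" the survivors and reward accuracy together with
    # hardware efficiency (here, their product); the RL controller would
    # update its policy toward high-reward pairs.
    best, best_reward = None, float("-inf")
    for arch, hw, eff in survivors:
        reward = train_and_validate(arch) * eff
        if reward > best_reward:
            best, best_reward = (arch, hw), reward
    return best, best_reward

candidates = [{"filters": f} for f in (32, 64, 128)]
hw_designs = [{"pe_count": p} for p in (64, 128)]
(best_arch, best_hw), reward = co_explore(candidates, hw_designs)
```

With these stubs, the pair (`filters=128`, `pe_count=128`) wins because it maximizes accuracy and efficiency jointly; the point of the framework is that this pairing is found without fixing the hardware design up front.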