Computer science
Software deployment
Acceleration
Enhanced Data rates for GSM Evolution (EDGE)
Heuristic
Process (computing)
Edge device
Deep neural network
Artificial neural network
Computer architecture
Software engineering
Artificial intelligence
Parallel computing
Operating system
Cloud computing
Authors
Zheyu Yan, Yifan Qin, Xiaobo Sharon Hu, Yiyu Shi
Identifier
DOI:10.1109/socc58585.2023.10256783
Abstract
Deep Neural Networks (DNNs) have demonstrated impressive performance across a wide range of tasks. However, deploying DNNs on edge devices poses significant challenges due to stringent power and computational budgets. An effective solution to this issue is software-hardware (SW-HW) co-design, which allows for the tailored creation of DNN models and hardware architectures that optimally utilize available resources. However, SW-HW co-design traditionally suffers from slow optimization because its optimizers start without any heuristic knowledge of the design space, an issue known as the "cold start" problem. In this study, we present a novel approach that leverages Large Language Models (LLMs) to address this issue. By injecting the abundant knowledge of pre-trained LLMs into the co-design optimization process, we effectively bypass the cold start problem and substantially accelerate the design process. The proposed method achieves a significant speedup of 25x. This advancement paves the way for the rapid and efficient deployment of DNNs on edge devices.
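The abstract does not detail the optimizer, but the cold-start effect it describes can be illustrated with a toy sketch: a greedy local search over a hypothetical two-parameter co-design space (weight bit-width, number of processing elements). The objective function, the search space, and the "LLM-suggested" warm-start point below are all invented for illustration; the only claim demonstrated is that a search seeded with prior knowledge reaches the optimum in fewer evaluation steps than one started blindly.

```python
# Toy SW-HW co-design space: (quantization bit-width, processing-element count).
# The objective is a stand-in for a real accuracy/latency evaluation; the
# optimum is placed (arbitrarily) at 8-bit weights and 64 PEs.
def evaluate(cfg):
    bits, pes = cfg
    return -abs(bits - 8) - abs(pes - 64) / 8

def neighbors(cfg):
    """All one-step moves in the discrete design space."""
    bits, pes = cfg
    return [(bits + db, pes + dp)
            for db in (-1, 0, 1) for dp in (-8, 0, 8)
            if (db, dp) != (0, 0) and bits + db >= 2 and pes + dp >= 8]

def greedy_search(start, max_steps=100):
    """Greedy hill climbing; returns the final config and steps taken."""
    cfg, steps = start, 0
    while steps < max_steps:
        best = max(neighbors(cfg), key=evaluate)
        if evaluate(best) <= evaluate(cfg):
            break  # local optimum reached
        cfg, steps = best, steps + 1
    return cfg, steps

# Cold start: an uninformed corner of the design space.
cold_cfg, cold_steps = greedy_search((2, 8))
# Warm start: a (hypothetical) LLM-suggested prior near known-good designs.
warm_cfg, warm_steps = greedy_search((8, 32))
print(cold_cfg, cold_steps)  # (8, 64) after 7 steps
print(warm_cfg, warm_steps)  # (8, 64) after 4 steps
```

Both searches converge to the same design, but the warm-started one needs roughly half the evaluation steps; with a real hardware-in-the-loop cost model, each avoided step is an expensive evaluation, which is the intuition behind the reported speedup.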