Field-Programmable Gate Array (FPGA)
Lookup Table (LUT)
Computer Science
Artificial Neural Network
High-Level Synthesis
Design Space Exploration
Machine Learning
Artificial Intelligence
Support Vector Machine
Computer Engineering
Inference
Computer Architecture
Embedded Systems
Programming Language
Authors
Dana Diaconu,Lucian Petrică,Michaela Blott,Miriam Leeser
Identifier
DOI: 10.1109/ipdpsw55747.2022.00022
Abstract
This paper explores methods of improving hardware resource estimation for the implementation of Deep Neural Networks (DNN) on FPGAs using machine learning algorithms. Current approaches operate at the DNN and High-Level Synthesis (HLS) levels. At the DNN level, most techniques are strictly analytical and rely on rough approximations and assumptions about the FPGA DNN implementation. The aim of this work is to facilitate design space exploration by providing more accurate resource estimates before running time-consuming processes such as HLS or logic synthesis. We integrated the algorithms into FINN, an end-to-end framework for building Quantized Neural Network (QNN) FPGA inference accelerators, in order to evaluate them against existing estimates as well as the actual synthesized design. We generated Support Vector Regression (SVR) models for LUT and BRAM estimation; the former yields promising results, while the latter consistently underperforms compared to HLS and analytical FINN estimates. Combining the analytical approach used in FINN with SVR LUT estimation provided more accurate results, because on its own SVR had insufficient extrapolation capability. For BRAM estimation, we improved the analytical approach by using a Decision Tree Classifier to predict whether memory is implemented as distributed memory or BRAM.
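The abstract names two learned estimators: SVR models that predict per-layer LUT/BRAM usage, and a Decision Tree Classifier that predicts whether a layer's memory is mapped to distributed (LUT-based) memory or BRAM, which can then gate an analytical BRAM formula. The following is a minimal sketch of how such estimators could be trained on per-layer features; the feature set, synthetic data, and scikit-learn pipeline are illustrative assumptions, not the FINN implementation or the paper's dataset.

```python
# Hypothetical sketch: SVR for LUT estimation plus a Decision Tree
# Classifier for the distributed-vs-BRAM memory decision.
# Features and targets are synthetic stand-ins for post-synthesis reports.
import numpy as np
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Assumed per-layer features: [PE count, SIMD width, weight bit width,
# activation bit width, matrix height, matrix width]
X = rng.integers(1, 64, size=(200, 6)).astype(float)

# Synthetic targets: LUT count (regression) and memory style
# (classification: 1 = BRAM, 0 = distributed memory).
y_lut = X[:, 0] * X[:, 1] * X[:, 2] * 3.0 + rng.normal(0, 50, size=200)
y_mem = (X[:, 4] * X[:, 5] * X[:, 2] > 20000).astype(int)

# SVR with feature scaling; an RBF kernel is a common choice when
# resource usage grows nonlinearly with parallelism and precision.
lut_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
lut_model.fit(X, y_lut)

# Decision tree predicting the memory implementation; its output could
# select which analytical BRAM estimate to apply.
mem_model = DecisionTreeClassifier(max_depth=4).fit(X, y_mem)

layer = np.array([[16, 8, 4, 4, 512, 512]], dtype=float)
print("predicted LUTs:", lut_model.predict(layer)[0])
print("BRAM implementation:", bool(mem_model.predict(layer)[0]))
```

In this setup the regressor supplies a LUT estimate directly, while the classifier only decides the memory style, reflecting the abstract's point that the purely learned BRAM estimate underperformed and was replaced by an analytical formula guided by a classifier.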