Dana Diaconu, Lucian Petrică, Michaela Blott, Miriam Leeser
Identifier
DOI: 10.1109/ipdpsw55747.2022.00022
Abstract
This paper explores methods of improving hardware resource estimation for the implementation of Deep Neural Networks (DNNs) on FPGAs using machine learning algorithms. Current approaches operate at the DNN and High-Level Synthesis (HLS) levels. At the DNN level, most techniques are strictly analytical, relying on rough approximations and assumptions about how the DNN is implemented on the FPGA. The aim of this work is to facilitate design space exploration by providing more accurate resource estimates before running time-consuming processes such as HLS or logic synthesis. We integrated the algorithms into FINN, an end-to-end framework for building Quantized Neural Network (QNN) FPGA inference accelerators, in order to evaluate them against the existing estimates as well as the actual synthesized designs. We trained Support Vector Regression (SVR) models for LUT and BRAM estimation; the former yields promising results, while the latter consistently underperforms compared to the HLS and analytical FINN estimates. Combining FINN's analytical approach with SVR-based LUT estimation produced more accurate results, because on its own SVR has insufficient extrapolation capability. For BRAM estimation, we improved the analytical approach by using a Decision Tree Classifier to predict whether a memory is implemented in distributed resources or in BRAM.
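To make the two estimators concrete, the following is a minimal sketch built with scikit-learn on synthetic data. It is not the paper's implementation: the feature set (PE, SIMD, bit widths, memory depth and width) and the synthetic targets are illustrative assumptions, standing in for the layer-level features a FINN-style flow could extract.

```python
# Sketch of the hybrid estimation idea: an SVR model regresses LUT counts,
# and a decision tree classifies the memory implementation style so that the
# appropriate analytical BRAM formula can then be applied.
# All feature names and data below are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical per-layer features: [PE, SIMD, weight bits, activation bits].
X = rng.integers(1, 64, size=(200, 4)).astype(float)
# Synthetic LUT targets; a real dataset would come from synthesized designs.
luts = X[:, 0] * X[:, 1] * X[:, 2] * 1.5 + rng.normal(0, 50, 200)

# SVR for LUT estimation; feature scaling matters for the RBF kernel.
lut_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
lut_model.fit(X, luts)
print("Predicted LUTs:", lut_model.predict(X[:3]))

# Decision tree predicting the memory style (0 = distributed LUTRAM,
# 1 = BRAM) from hypothetical [depth, width] features; the predicted style
# then selects which analytical resource formula to evaluate.
mem_features = rng.integers(1, 4096, size=(200, 2)).astype(float)
mem_style = (mem_features[:, 0] * mem_features[:, 1] > 2e5).astype(int)
mem_clf = DecisionTreeClassifier(max_depth=4).fit(mem_features, mem_style)
print("Predicted memory style:", mem_clf.predict(mem_features[:3]))
```

The design point the abstract hints at is visible here: the learned model does not replace the analytical BRAM formulas, it only resolves the discrete choice (distributed vs. BRAM) that the purely analytical approach had to guess, while SVR is used directly only where it performs well (LUTs).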