Convolutional neural network
Computer science
Acceleration
Artificial intelligence
Physics
Classical mechanics
Identifier
DOI:10.1109/bibe60311.2023.00022
Abstract
Despite the progress made by convolutional neural networks (CNNs), optimizing their training and deployment remains an important problem. This paper reviews and evaluates existing acceleration methods for CNNs and their potential for domain-specific optimization and customization of FPGA accelerators. For example, the VGG11 network can achieve an average accuracy of 91.48% on the CIFAR-10 dataset while consuming only 1.56 W of power, while FINN can achieve comparable accuracy (88.74%) with only 1.16 mW. This paper provides valuable contributions and prospects for FPGA-accelerated implementations of CNNs, and offers guidance and ideas for research and development of hardware in the AI field.