Computer science
Field-programmable gate array
Floating point
Convolutional neural network
Computer hardware
Parallel computing
Rounding
Hardware acceleration
Algorithm
Artificial intelligence
Operating system
Authors
Xiaocong Lian,Zhenyu Liu,Zhourui Song,Jiwu Dai,Wei Zhou,Xiangyang Ji
Source
Journal: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
[Institute of Electrical and Electronics Engineers]
Date: 2019-08-01
Volume/Issue: 27 (8): 1874-1885
Cited by: 115
Identifiers
DOI:10.1109/tvlsi.2019.2913958
Abstract
Convolutional neural networks (CNNs) are widely used and have achieved great success in computer vision and speech processing applications. However, deploying large-scale CNN models in embedded systems is subject to tight computation and memory constraints. In this paper, an optimized block-floating-point (BFP) arithmetic is adopted in our accelerator for efficient inference of deep neural networks. Feature maps and model parameters are represented in 16-bit and 8-bit formats, respectively, in the off-chip memory, which reduces memory and off-chip bandwidth requirements by 50% and 75% compared with the 32-bit floating-point counterpart. The proposed 8-bit BFP arithmetic, with optimized rounding and shifting-operation-based quantization schemes, improves energy and hardware efficiency threefold. A CNN model can be deployed in our accelerator without retraining, at the cost of an accuracy loss of no more than 0.12%. The proposed reconfigurable accelerator, with three parallelism dimensions, ping-pong off-chip DDR3 memory access, and an optimized on-chip buffer group, is implemented on the Xilinx VC709 evaluation board. It achieves 760.83 GOP/s and 82.88 GOP/s/W at a 200-MHz working frequency, significantly outperforming previous accelerators.
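The idea behind the BFP representation in the abstract is that a block of values shares a single exponent while each value keeps only a short fixed-point mantissa, so quantization reduces to picking an exponent, shifting (dividing by a power of two), and rounding. Below is a minimal NumPy sketch of this idea; the function names, the block granularity, the signed 8-bit mantissa width, and round-to-nearest are illustrative assumptions, not the paper's exact hardware scheme.

    import numpy as np

    def bfp_quantize(block, mantissa_bits=8):
        """Quantize a block of float32 values to block-floating-point (BFP):
        the whole block shares one exponent; each value keeps a signed
        fixed-point mantissa. Sketch only; bit width, rounding mode, and
        block granularity are assumptions, not the paper's exact design."""
        max_abs = float(np.max(np.abs(block)))
        if max_abs == 0.0:
            return np.zeros(block.shape, dtype=np.int32), 0
        # Shared exponent chosen from the largest magnitude in the block.
        shared_exp = int(np.floor(np.log2(max_abs)))
        # Scale so the largest value lands near the top of the signed
        # mantissa range [-2^(m-1), 2^(m-1) - 1].
        scale = 2.0 ** (shared_exp - (mantissa_bits - 2))
        # Quantization is a shift (divide by a power of two) plus rounding.
        mantissas = np.round(block / scale)
        mantissas = np.clip(mantissas, -(2 ** (mantissa_bits - 1)),
                            2 ** (mantissa_bits - 1) - 1)
        return mantissas.astype(np.int32), shared_exp

    def bfp_dequantize(mantissas, shared_exp, mantissa_bits=8):
        """Reconstruct approximate float32 values from the BFP form."""
        scale = 2.0 ** (shared_exp - (mantissa_bits - 2))
        return mantissas.astype(np.float32) * scale

    # Usage: quantize a hypothetical block of weights, check the error.
    weights = np.random.randn(64).astype(np.float32)
    m, e = bfp_quantize(weights, mantissa_bits=8)
    error = np.max(np.abs(weights - bfp_dequantize(m, e, mantissa_bits=8)))
    print(f"shared exponent: {e}, max reconstruction error: {error:.6f}")

Because every value in a block shares the same exponent, multiply-accumulate operations within a block can run on plain integer mantissas, which is what makes this representation attractive for FPGA accelerators like the one described.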