Keywords
Computer science
Acceleration
Pruning
Parallel computing
Inference
Field-programmable gate array
Quantization (signal processing)
Computer engineering
Porting
Exploitation
Algorithm
Software
Artificial intelligence
Computer hardware
Biology
Computer security
Programming language
Agronomy
Authors
Keqi Fu, Zhi Qi, Jiaxuan Cai, Xulong Shi
Identifier
DOI:10.1145/3508352.3549368
Abstract
As the extreme case of quantized networks, Binary Neural Networks (BNNs) have received tremendous attention due to many hardware-friendly properties in terms of storage and computation. To reach the limit of compact models, we attempt to combine binarization with pruning techniques, further exploring the redundancy of BNNs. However, coarse-grained pruning methods may cause severe accuracy drops, while traditional fine-grained ones induce irregular sparsity that is hard for hardware to exploit. In this paper, we propose two advanced fine-grained BNN pruning modules, i.e., structured channel-wise kernel pruning and dynamic spatial pruning, from a joint perspective of algorithm and hardware. The pruned BNN models are trained from scratch and achieve not only higher accuracy but also a high degree of parallelism. We then develop an accelerator architecture that can effectively exploit the sparsity produced by our algorithm. Finally, we implement the pruned BNN models on an embedded FPGA (Ultra96v2). The results show that our software-hardware co-design achieves a 5.4x inference speedup over the baseline BNN, with higher resource and energy efficiency than prior FPGA-based BNN implementations.
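The abstract rests on two ideas: BNNs replace multiply-accumulate with XNOR-popcount logic, and structured channel-wise pruning zeroes whole kernels so the sparsity stays regular. The NumPy sketch below illustrates both under stated assumptions; it is not the authors' implementation, the names xnor_popcount_dot, channelwise_kernel_prune, and keep_ratio are hypothetical, and the L1-norm selection rule is an assumption (the paper trains its pruned models from scratch).

```python
import numpy as np

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of two {-1, +1} vectors stored as {0, 1} bits.

    XNOR marks the positions where the vectors agree; popcount of that
    result gives dot = 2 * agreements - n, which is why BNNs can replace
    multiply-accumulate with cheap bitwise logic.
    """
    n = a_bits.size
    agreements = int(np.sum(a_bits == w_bits))  # XNOR + popcount
    return 2 * agreements - n

def channelwise_kernel_prune(weights, keep_ratio=0.5):
    """Structured channel-wise kernel pruning (sketch only).

    For each output channel, whole 2-D kernels with the smallest L1 norm
    are zeroed, so the surviving sparsity pattern stays regular enough
    for a parallel accelerator to exploit.
    weights: (out_ch, in_ch, k, k) real-valued latent weights.
    """
    out_ch, in_ch, _, _ = weights.shape
    keep = max(1, int(in_ch * keep_ratio))
    norms = np.abs(weights).sum(axis=(2, 3))     # L1 norm of each kernel
    mask = np.zeros((out_ch, in_ch), dtype=bool)
    for o in range(out_ch):
        mask[o, np.argsort(norms[o])[-keep:]] = True  # keep largest kernels
    return weights * mask[:, :, None, None], mask

# Sanity check: XNOR-popcount matches the real-valued {-1, +1} dot product.
rng = np.random.default_rng(0)
a, w = rng.integers(0, 2, 64), rng.integers(0, 2, 64)
assert xnor_popcount_dot(a, w) == int((2 * a - 1) @ (2 * w - 1))
```

Because the mask removes entire kernels rather than scattered weights, an accelerator can skip pruned kernels in whole compute tiles, which is the kind of regularity the paper's hardware design relies on.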