Computer Science
Field-Programmable Gate Array
Computation
Systolic Array
Energy Efficiency
Computer Architecture
Embedded Systems
Reconfigurable Computing
Gate Array
Parallel Computing
Computer Hardware
Algorithm
Very-Large-Scale Integration
Electrical Engineering
Engineering
Authors
Wenhua Ye, Xu Zhou, Joey Tianyi Zhou, Cen Chen, Kenli Li
Source
Journal: ACM Transactions on Embedded Computing Systems
[Association for Computing Machinery]
Date: 2022-07-20
Volume/Issue: 22 (6): 1-22
Citations: 16
Abstract
Transformer model architectures have recently attracted great interest in natural language processing, machine translation, and computer vision, where attention mechanisms are their building blocks. However, the attention mechanism is expensive because of its intensive matrix computations and complicated data flow. Existing hardware architectures have disadvantages for the computation structure of attention, such as inflexibility and low efficiency. Most existing work accelerates attention by reducing the amount of computation through various pruning algorithms, which affects the results to some extent depending on the sparsity. This paper proposes a hardware accelerator for multi-head attention (MHA) on field-programmable gate arrays (FPGAs) with a reconfigurable architecture, an efficient systolic array, and a hardware-friendly radix-2 softmax. We propose a novel method, the Four-input Processing Element (FPE), which doubles the computation rate of the data-aware systolic array (SA) and makes it efficient and load-balanced. In particular, the computation framework is carefully designed to ensure high utilization of the SA. Our design is evaluated on a Xilinx Alveo U250 card, and the proposed architecture achieves 51.3× and 17.3× latency improvements, and 54.4× and 17.9× energy savings, compared with a CPU and a GPU, respectively.
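The abstract does not detail the internals of the radix-2 softmax or the FPE. As a hedged illustration of the computation being accelerated, the Python sketch below shows a plain reference multi-head attention (assumed shapes and names, not the authors' hardware design), with the exponential rewritten in base 2 — a common starting point for a hardware-friendly radix-2 softmax, since 2^x can be split into an integer shift and a small fractional lookup on an FPGA.

```python
import numpy as np

def radix2_softmax(x):
    # Base-2 reformulation: e^z == 2^(z * log2(e)), which a hardware unit can
    # realize as a shift (integer part) plus a small table (fractional part).
    # Numerically this equals the standard softmax.
    z = x - np.max(x, axis=-1, keepdims=True)   # subtract max for stability
    p = np.exp2(z * np.log2(np.e))
    return p / np.sum(p, axis=-1, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads):
    # Reference (software) MHA: each head performs the Q·K^T and prob·V
    # matrix products that the systolic array in the paper targets.
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    outputs = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = (Q[:, s] @ K[:, s].T) / np.sqrt(d_head)
        outputs.append(radix2_softmax(scores) @ V[:, s])
    return np.concatenate(outputs, axis=-1) @ Wo

# Hypothetical usage with small random matrices:
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 64))
W = [rng.standard_normal((64, 64)) for _ in range(4)]
out = multi_head_attention(X, *W, num_heads=4)   # shape (8, 64)
```

This is only a functional model of the operators named in the abstract; the paper's contribution lies in how the FPE-based systolic array and reconfigurable data flow map these matrix products onto the FPGA.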