Softmax function
Computer science
Datapath
Bottleneck
Latency
Transformer
Language model
Field-programmable gate array
Parallel computing
Artificial intelligence
Artificial neural network
Computer hardware
Embedded system
Telecommunications
Physics
Quantum mechanics
Voltage
Authors
Nazim Altar Koca, Anh Tuan, Chip-Hong Chang
Identifier
DOI: 10.1109/iscas46773.2023.10181465
Abstract
Self-attention networks such as the Transformer have become state-of-the-art models for natural language processing (NLP) problems. The Softmax function, which serves as the normalizer that produces attention scores, turns out to be a severe throughput and latency bottleneck of a Transformer network. The Softmax datapath consists of data-dependent, sequential nonlinear exponentiation and division operations, which are neither amenable to pipelining and parallelism nor directly linearizable for pretrained models without a substantial accuracy drop. In this paper, we propose a hardware-efficient Softmax approximation that can be used as a direct plug-in substitute in a pretrained Transformer network to accelerate NLP tasks without compromising accuracy. Experimental results on an FPGA implementation show that our design outperforms a vanilla Softmax built from Xilinx IPs, with 15× fewer LUTs, 55× fewer registers, and 23× lower latency at a similar clock frequency, and with less than 1% accuracy drop on major language benchmark tasks. We also propose a pruning method that reduces the input entropy of Softmax for NLP problems with a large number of inputs; validated on the CoLA task, it achieves a further 25% reduction in latency.
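The abstract does not detail the authors' approximation, so the following is only a minimal software sketch of why the exact Softmax is hardware-unfriendly and what a generic base-2, shift-oriented approximation of the kind common in FPGA Softmax accelerators might look like. The function names softmax_reference and softmax_base2_approx are hypothetical and are not the paper's design.

```python
import numpy as np

def softmax_reference(x):
    # Standard numerically stable Softmax: subtract the max, exponentiate, normalize.
    # The exp() and the final division are the data-dependent steps the abstract
    # identifies as the throughput and latency bottleneck in hardware.
    z = x - np.max(x)
    e = np.exp(z)
    return e / np.sum(e)

def softmax_base2_approx(x):
    # Hypothetical hardware-friendly sketch (NOT the paper's method): replace e^z with
    # 2^round(z*log2(e)), so each exponentiation becomes an integer power of two
    # (a shift in hardware) and only one reciprocal of their sum is needed.
    z = (x - np.max(x)) * np.log2(np.e)   # work in base 2
    p = np.exp2(np.round(z))              # 2^integer -> implementable as a shift
    return p / np.sum(p)

scores = np.array([2.1, 0.3, -1.7, 0.9])
print(softmax_reference(scores))
print(softmax_base2_approx(scores))
```

Rounding the base-2 exponent trades a small approximation error for the removal of the nonlinear exponentiation unit; the paper's plug-in design reports under 1% accuracy drop, but the exact mechanism it uses is not specified in this abstract.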