Keywords
Softmax function, computer science, acceleration, inference, normalization, Transformer, energy efficiency, computation, quantization (signal processing), computer engineering, software, algorithm, parallel computing, artificial intelligence, computer hardware, engineering, deep learning, voltage, sociology, electrical engineering, programming languages, anthropology
Authors
Wenxun Wang, Shuchang Zhou, Wenyu Sun, Peiqin Sun, Yongpan Liu
Identifier
DOI: 10.1109/iccad57390.2023.10323725
Abstract
Transformers have shown remarkable performance in both natural language processing (NLP) and computer vision (CV) tasks. However, their real-time inference speed and efficiency are limited due to the inefficiency in Softmax and Layer Normalization (LayerNorm). Previous works based on function approximation suffer from inefficient implementation as they place emphasis on computation while disregarding memory overhead concerns. Moreover, such methods rely on retraining to compensate for approximation error, which can be costly and inconvenient. In this paper, we present SOLE, a hardware-software co-design for Softmax and LayerNorm which is composed of E2Softmax and AILayerNorm. E2Softmax utilizes log2 quantization of the exponent function and log-based division to approximate Softmax, while AILayerNorm adopts low-precision statistic calculation. Compared with state-of-the-art designs, we achieve both low-precision calculation and low bit-width storage on Softmax and LayerNorm. Experiments show that SOLE maintains inference accuracy without retraining while offering orders of magnitude speedup and energy savings over GPU, achieving 3.04×, 3.86× energy-efficiency improvements and 2.82×, 3.32× area-efficiency improvements over prior state-of-the-art custom hardware for Softmax and LayerNorm, respectively.
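The E2Softmax description in the abstract combines two generic hardware-friendly ideas: computing exponentials in base 2 with a log2-quantized exponent, and replacing the division by the normalizing sum with a subtraction in the log2 domain. The NumPy sketch below is only a software-level illustration of those two ideas under assumed rounding choices (flooring the base-2 exponent); the function name approx_softmax_log2 is hypothetical, and the code does not reproduce the paper's E2Softmax datapath, bit widths, or the AILayerNorm unit.

import numpy as np

def approx_softmax_log2(x):
    # Hypothetical illustration only: mimics base-2 exponentiation with a
    # log2-quantized exponent and division performed as a subtraction in the
    # log2 domain; it is NOT the paper's E2Softmax hardware algorithm.
    x = np.asarray(x, dtype=np.float64)
    z = x - x.max(axis=-1, keepdims=True)   # standard max-shift for numerical stability
    t = z * np.log2(np.e)                   # work in base 2: e^z = 2^(z * log2(e))
    t_int = np.floor(t)                     # assumed "log2 quantization": integer exponent,
                                            # so each term is an exact power of two (a shift)
    log_sum = np.log2(np.exp2(t_int).sum(axis=-1, keepdims=True))
    return np.exp2(t_int - log_sum)         # "log-based division": subtract in the log2 domain

logits = np.array([2.0, 1.0, 0.1])
print("approx:", approx_softmax_log2(logits))
print("exact :", np.exp(logits) / np.exp(logits).sum())

Because every quantized term is an exact power of two, normalizing by the sum reduces to an exponent subtraction, which is why schemes of this kind can avoid a hardware divider.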