Algorithm-hardware Co-design of Attention Mechanism on FPGA Devices

Concepts: Computer science, Field-programmable gate array, Kernel (algebra), Robustness (evolution), Parallel computing, Rectangle, Embedded systems, Computer hardware, Computer engineering, Biochemistry, Chemistry, Geometry, Mathematics, Combinatorics, Gene
Authors
Xinyi Zhang, Yawen Wu, Peipei Zhou, Xulong Tang, Jingtong Hu
Source
Journal: ACM Transactions on Embedded Computing Systems [Association for Computing Machinery]
Volume (issue), pages: 20 (5s), 1-24. Cited by: 30
Identifier
DOI: 10.1145/3477002
Abstract

Multi-head self-attention (the attention mechanism) has been employed in a variety of fields, such as machine translation, language modeling, and image processing, owing to its superiority in feature extraction and sequential data analysis. This strength stems from the large number of parameters and the sophisticated model architecture behind the attention mechanism. To deploy the attention mechanism efficiently on resource-constrained devices, existing works propose to reduce the model size either by building a customized smaller model or by compressing a large standard model. A customized smaller model is usually optimized for a specific task and requires effort in model-parameter exploration. Model compression reduces the model size without hurting the robustness of the model architecture and can therefore be applied efficiently across tasks. The compressed weights in the model are usually regularly shaped (e.g., rectangular), but their dimensions vary (e.g., the rectangles differ in height and width). Such a compressed attention mechanism can be deployed efficiently on CPU/GPU platforms, whose memory and computing resources can be assigned flexibly on demand. On Field Programmable Gate Arrays (FPGAs), however, the data-buffer allocation and the computing kernel are fixed during run time to achieve maximum energy efficiency. After compression, the weights are much smaller and differ in size, which leads to inefficient utilization of the FPGA on-chip buffer. Moreover, the varying weight heights and widths may lead to inefficient execution of the FPGA computing kernel. Because of the large number of weights in the attention mechanism, building a unique buffer and computing kernel for each compressed weight on the FPGA is not feasible. In this work, we jointly consider the impact of compression on buffer allocation and on the required computing kernel while compressing the attention mechanism. A novel structural pruning method with memory-footprint awareness is proposed, and the associated FPGA accelerator is designed. The experimental results show that our work compresses the Transformer (an attention-based model) by 95x. The developed accelerator fully utilizes the FPGA resources, processing the sparse attention mechanism with a run-time throughput of 1.87 TOPS on the ZCU102 FPGA.
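
To make the mechanism under discussion concrete, the following is a minimal NumPy sketch of multi-head self-attention (scaled dot-product attention computed per head, then concatenated and projected). The dimension sizes, random weights, and helper names are illustrative assumptions for exposition, not the configuration used in the paper.

```python
# Minimal sketch of multi-head self-attention; dimensions and weights are
# illustrative assumptions, not the paper's configuration.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, w_o, n_heads):
    """x: (seq_len, d_model); w_q/w_k/w_v/w_o: (d_model, d_model) projections."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads

    def split(h):
        # (seq_len, d_model) -> (n_heads, seq_len, d_head)
        return h.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(x @ w_q), split(x @ w_k), split(x @ w_v)
    # Scaled dot-product attention, computed independently per head.
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))
    heads = scores @ v  # (n_heads, seq_len, d_head)
    # Concatenate heads and apply the output projection.
    return heads.transpose(1, 0, 2).reshape(seq_len, d_model) @ w_o

rng = np.random.default_rng(0)
d_model, n_heads, seq_len = 64, 8, 10
w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4))
y = multi_head_attention(rng.standard_normal((seq_len, d_model)), w_q, w_k, w_v, w_o, n_heads)
print(y.shape)  # (10, 64)
```

Each of the four projection matrices above is exactly the kind of regularly shaped weight the abstract describes: pruning shrinks them into rectangles of differing heights and widths, which is what strains fixed-size FPGA buffers.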
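
The core co-design idea, that compressed weights must still map onto fixed-size on-chip buffers and a fixed computing kernel, can be illustrated with a hedged sketch of tile-aligned structural pruning. The per-row L2-norm importance score and the TILE constant below are hypothetical stand-ins; the paper's actual memory-footprint-aware criterion is not reproduced here.

```python
# Hedged sketch: structural (whole-row) pruning whose kept-row count is
# aligned to a hardware tile size, so every pruned weight matrix fills the
# fixed on-chip buffer with no partial tiles. The scoring rule and TILE are
# illustrative assumptions, not the paper's published method.
import numpy as np

TILE = 16  # assumed number of rows the FPGA kernel processes per pass

def structural_prune(w, keep_ratio):
    """Prune rows of w (out_dim, in_dim); keep a tile-multiple number of rows."""
    n_rows = w.shape[0]
    # Round the kept-row count down to a multiple of TILE (at least one tile),
    # so the pruned matrix maps onto the buffer with no partial tiles.
    n_keep = max(TILE, int(n_rows * keep_ratio) // TILE * TILE)
    importance = np.linalg.norm(w, axis=1)  # per-row L2 norm (assumed score)
    keep_idx = np.sort(np.argsort(importance)[-n_keep:])
    return w[keep_idx], keep_idx

rng = np.random.default_rng(1)
w = rng.standard_normal((300, 64))
w_pruned, kept_rows = structural_prune(w, keep_ratio=0.4)
print(w_pruned.shape)                 # (112, 64): 112 rows = 7 full tiles of 16
assert w_pruned.shape[0] % TILE == 0  # no partial tile reaches the buffer
```

Aligning every surviving dimension to the kernel's tile size is one way to reconcile compression with a single fixed buffer and computing kernel; the paper's accelerator presumably enforces a comparable constraint during pruning.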
