
Algorithm-hardware Co-design of Attention Mechanism on FPGA Devices

Keywords: computer science; field-programmable gate array (FPGA); kernel; robustness; parallel computing; rectangle; embedded systems; computer hardware; computer engineering
Authors
Xinyi Zhang, Yawen Wu, Peipei Zhou, Xulong Tang, Jingtong Hu
Source
Journal: ACM Transactions on Embedded Computing Systems [Association for Computing Machinery]
Volume/Issue: 20 (5s): 1-24; Cited by: 30
Identifier
DOI: 10.1145/3477002
Abstract

Multi-head self-attention (the attention mechanism) has been employed in a variety of fields such as machine translation, language modeling, and image processing, owing to its superiority in feature extraction and sequential data analysis. This capability comes from the large number of parameters and the sophisticated model architecture behind the attention mechanism. To deploy the attention mechanism efficiently on resource-constrained devices, existing works propose to reduce the model size either by building a customized smaller model or by compressing a large standard model. A customized smaller model is usually optimized for a specific task and requires effort in exploring model parameters. Model compression reduces the model size without hurting the robustness of the model architecture, so it can be applied efficiently to different tasks. The compressed weights in the model are usually regularly shaped (e.g., rectangular), but their dimensions vary (e.g., the rectangles differ in height and width). Such a compressed attention mechanism can be deployed efficiently on CPU/GPU platforms, since their memory and computing resources can be assigned flexibly on demand. For Field-Programmable Gate Arrays (FPGAs), however, the data buffer allocation and the computing kernels are fixed at run time to achieve maximum energy efficiency. After compression, the weights are much smaller and differ in size, which leads to inefficient utilization of the FPGA on-chip buffer. Moreover, the varying weight heights and widths may lead to inefficient execution of the FPGA computing kernel. Because of the large number of weights in the attention mechanism, building a dedicated buffer and computing kernel for each compressed weight on the FPGA is not feasible. In this work, we jointly consider the compression's impact on buffer allocation and on the required computing kernel while compressing the attention mechanism. A novel structural pruning method with memory footprint awareness is proposed, and the associated FPGA accelerator is designed. The experimental results show that our work compresses the Transformer (an attention-based model) by 95×. The developed accelerator fully utilizes the FPGA resources, processing the sparse attention mechanism with a run-time throughput of 1.87 TOPS on a ZCU102 FPGA.
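The core idea of memory-footprint-aware structural pruning, removing weights so that every surviving weight matrix still maps cleanly onto a fixed-size on-chip buffer, can be illustrated with a short sketch. The snippet below is not the authors' algorithm: it assumes a hypothetical buffer tile shape (TILE_ROWS × TILE_COLS) and uses a simple L2-norm column saliency as a stand-in for the paper's pruning criterion. It prunes whole columns of a weight matrix and rounds the kept count down to a multiple of the tile width, so the pruned rectangle tiles the fixed FPGA buffer with no fragmentation.

```python
# Illustrative sketch only: memory-footprint-aware structured pruning.
# TILE_ROWS x TILE_COLS is a hypothetical on-chip buffer tile shape, and
# L2-norm column saliency stands in for the paper's actual pruning score.
import numpy as np

TILE_ROWS, TILE_COLS = 64, 64  # assumed hardware buffer tile shape

def footprint_aware_prune(w: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Prune whole columns of `w`, rounding the kept count down to a
    multiple of TILE_COLS so the result tiles the fixed buffer exactly."""
    n_cols = w.shape[1]
    # Rank columns by L2 norm (a common structured-pruning saliency score).
    saliency = np.linalg.norm(w, axis=0)
    # Round the number of kept columns down to a whole number of tiles.
    n_keep = max(TILE_COLS, int(n_cols * keep_ratio) // TILE_COLS * TILE_COLS)
    # Keep the most salient columns, preserving their original order.
    keep_idx = np.sort(np.argsort(saliency)[-n_keep:])
    return w[:, keep_idx]

# Example: a 512x512 attention projection pruned to ~25% of its columns.
w = np.random.randn(512, 512).astype(np.float32)
w_pruned = footprint_aware_prune(w, keep_ratio=0.25)
assert w_pruned.shape[1] % TILE_COLS == 0  # fits the buffer tiles exactly
print(w.shape, "->", w_pruned.shape)      # (512, 512) -> (512, 128)
```

Constraining every pruned weight to a multiple of the tile size is what lets a single fixed buffer and computing kernel serve all the differently sized compressed weights, which is the deployment problem the abstract identifies for FPGAs.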
