
DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation

Keywords: Computer science, Field-programmable gate array, Automatic summarization, Code generation, Design space exploration, Latency (audio), Acceleration, Transformer, Bottleneck, Parallel computing, Computer hardware, Computer architecture, Embedded systems, Artificial intelligence, Operating systems, Telecommunications, Physics, Quantum mechanics, Voltage, Key (lock)
Authors
Seongmin Hong, Seungjae Moon, Junsoo Kim, Sungjae Lee, Minsub Kim, Dongsoo Lee, Joo-Young Kim
Identifier
DOI: 10.1109/micro56248.2022.00051
Abstract

Transformer is a deep learning language model widely used for natural language processing (NLP) services in datacenters. Among transformer models, the Generative Pre-trained Transformer (GPT) has achieved remarkable performance in text generation, or natural language generation (NLG), which requires processing a large input context in the summarization stage, followed by a generation stage that produces a single word at a time. Conventional platforms such as GPUs are specialized for the parallel processing of large inputs in the summarization stage, but their performance degrades significantly in the generation stage due to its sequential nature. An efficient hardware platform is therefore required to address the high latency caused by the sequential characteristic of text generation. In this paper, we present DFX, a multi-FPGA acceleration appliance that executes GPT-2 model inference end-to-end with low latency and high throughput in both the summarization and generation stages. DFX uses model parallelism and an optimized, model-and-hardware-aware dataflow for fast simultaneous workload execution across devices. Its compute cores operate on custom instructions and provide GPT-2 operations end-to-end. We implement the proposed hardware architecture on four Xilinx Alveo U280 FPGAs and utilize all channels of the high-bandwidth memory (HBM) and the maximum number of compute resources for high hardware efficiency. DFX achieves a 5.58× speedup and 3.99× higher energy efficiency over four NVIDIA V100 GPUs on the modern GPT-2 model. DFX is also 8.21× more cost-effective than the GPU appliance, suggesting that it is a promising solution for text generation workloads in cloud datacenters.
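The latency argument above hinges on the two-stage structure of GPT inference: the summarization (prefill) stage consumes the whole input context in one parallel pass, while the generation (decode) stage emits one token at a time, each step depending on the token produced by the previous one. The sketch below illustrates that control flow in Python; the `toy_model` function and its cache are hypothetical stand-ins for a GPT-2 forward pass and its key/value cache, not DFX's or any library's actual API.

```python
# Minimal, self-contained sketch of two-stage autoregressive generation.
# `toy_model` is a hypothetical stand-in for a GPT-2 forward pass; it
# exists only so the control flow is runnable.

def toy_model(ids, past=None):
    """Return (next_token_id, cache). Toy rule: next token = last + 1."""
    cache = (past or []) + list(ids)
    return (cache[-1] + 1) % 50257, cache  # 50257 = GPT-2 vocab size

def generate(prompt_ids, max_new_tokens):
    # Summarization (prefill) stage: the entire input context is fed
    # in one batched pass, so all positions can be processed in
    # parallel -- the regime GPUs are specialized for.
    next_id, kv_cache = toy_model(prompt_ids)

    out = list(prompt_ids)
    # Generation (decode) stage: one token per step, and each step
    # consumes the token produced by the previous one. This serial
    # dependency is the latency bottleneck the paper targets.
    for _ in range(max_new_tokens):
        out.append(next_id)
        next_id, kv_cache = toy_model([next_id], past=kv_cache)
    return out

print(generate([464, 2068, 7586], max_new_tokens=5))
```

The prefill pass amortizes its cost across all context positions, but each decode step is a small workload that cannot be parallelized across output tokens, which is why the abstract argues that a platform optimized for per-step latency, rather than batch throughput, suits the generation stage.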
