DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation

Keywords
Computer science, Field-programmable gate array, Automatic summarization, Code generation, Design space exploration, Latency (audio), Speedup, Transformer, Bottleneck, Parallel computing, Computer hardware, Computer architecture, Embedded system, Artificial intelligence, Operating system, Physics, Voltage, Telecommunications, Quantum mechanics, Key (lock)
Authors
Seongmin Hong, Seungjae Moon, Junsoo Kim, Sungjae Lee, Minsub Kim, Dongsoo Lee, Joo-Young Kim
Identifier
DOI: 10.1109/micro56248.2022.00051
Abstract

Transformer is a deep learning language model widely used for natural language processing (NLP) services in datacenters. Among transformer models, the Generative Pre-trained Transformer (GPT) has achieved remarkable performance in text generation, or natural language generation (NLG), which requires processing a large input context in the summarization stage, followed by a generation stage that produces a single word at a time. Conventional platforms such as GPUs are specialized for the parallel processing of large inputs in the summarization stage, but their performance degrades significantly in the generation stage due to its sequential characteristic. An efficient hardware platform is therefore required to address the high latency caused by the sequential nature of text generation. In this paper, we present DFX, a multi-FPGA acceleration appliance that executes GPT-2 model inference end-to-end with low latency and high throughput in both the summarization and generation stages. DFX uses model parallelism and an optimized, model-and-hardware-aware dataflow for fast simultaneous workload execution across devices. Its compute cores operate on custom instructions and support GPT-2 operations end-to-end. We implement the proposed hardware architecture on four Xilinx Alveo U280 FPGAs, utilizing all channels of the high-bandwidth memory (HBM) and the maximum number of compute resources for high hardware efficiency. DFX achieves a 5.58× speedup and 3.99× higher energy efficiency over four NVIDIA V100 GPUs on the modern GPT-2 model. DFX is also 8.21× more cost-effective than the GPU appliance, suggesting that it is a promising solution for text-generation workloads in cloud datacenters.
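To make the two-stage inference pattern concrete, the following is a minimal Python sketch of the control flow the abstract describes. It is not the DFX implementation: `forward`, `generate`, and the toy model are hypothetical stand-ins for a real GPT-2 forward pass, chosen only to show where the parallelism is and where it disappears.

```python
# Minimal sketch (assumption, not DFX code) of the two text-generation
# stages described in the abstract. `forward` stands in for a GPT-2
# forward pass; the point is the control flow, not the arithmetic.

from typing import Callable, List

Token = int
Model = Callable[[List[Token]], Token]

def generate(forward: Model, context: List[Token], n_new: int) -> List[Token]:
    tokens = list(context)

    # Summarization stage: the entire input context is available up
    # front, so its tokens can be processed in parallel in one large
    # pass -- the case GPU-style platforms are optimized for. That pass
    # yields the first new token.
    tokens.append(forward(tokens))

    # Generation stage: every further token depends on all tokens before
    # it, so the model must run once per output token, strictly in
    # order. This sequential dependency is the latency bottleneck the
    # paper targets.
    for _ in range(n_new - 1):
        tokens.append(forward(tokens))  # one small pass per token

    return tokens

if __name__ == "__main__":
    # Toy "model": next token is the sum of the last two, mod 100.
    toy = lambda ts: (ts[-1] + ts[-2]) % 100
    print(generate(toy, context=[1, 1], n_new=8))
```

The generation loop cannot be batched across iterations because each emitted token is an input to the next forward pass; this is why a platform tuned for large parallel inputs loses its advantage in the generation stage, and why DFX optimizes both stages end-to-end.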
