In-Datacenter Performance Analysis of a Tensor Processing Unit

Topics: Computer Science, Central Processing Unit, Application-Specific Integrated Circuit, Parallel Computing, Throughput, Embedded Systems, Computer Hardware, Operating Systems, Wireless
Authors
Norman P. Jouppi,Cliff Young,Nishant Patil,David A. Patterson,Gaurav Agrawal,Raminder Bajwa,S. C. Bates,Suresh Bhatia,Nan Boden,Al Borchers,Rick Boyle,Pierre-luc Cantin,Clifford Chao,Chris Clark,Jeremy Coriell,Mike Daley,Matt Dau,Jay B. Dean,Ben Gelb,Tara Vazir Ghaemmaghami,Rajendra Gottipati,William Gulland,Robert B. Hagmann,C. Richard Ho,Doug Hogberg,John Wei-Shan Hu,Robert Hundt,Dan Hurt,Julian Ibarz,Aaron Jaffey,Alek Jaworski,Alexander Kaplan,Harshit Khaitan,Daniel Killebrew,Andy Koch,Naveen Kumar,Steve Lacy,James Laudon,James Law,Diemthu Le,Chris Leary,Zhuyuan Liu,Kyle Lucke,Alan Lundin,Gordon MacKean,Adriana Maggiore,Maire Mahony,Kieran Miller,Rahul Nagarajan,Ravi Narayanaswami,Ray Ni,Kathy Nix,Thomas Norrie,Mark Omernick,Narayana Penukonda,Andy Phelps,Jonathan Ross,Matt Ross,Amir Salek Farrokhi,Emad Samadiani,Chris Severn,Gregory Sizikov,Matthew Snelham,Jed Souter,Dan Steinberg,Andy Swing,Mercedes Tan,Gregory Thorson,Bo Tian,Horia Toma,Erick Tuttle,Vijay Vasudevan,Richard Walter,Walter Wang,Eric Wilcox,Doe Hyun Yoon
Identifier
DOI: 10.1145/3079856.3080246
Abstract

Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU), deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X-30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X-80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
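As a quick sanity check on the headline numbers, the short sketch below recomputes the 92 TOPS peak figure from the MAC count and restates the relative speedup and efficiency ratios quoted in the abstract. The 256x256 MAC array geometry and the 700 MHz nominal clock are assumptions taken from the full paper, not from this abstract.

```python
# Minimal sketch: sanity-check the abstract's headline numbers.
# Assumptions (from the full paper, not this abstract): the MAC array
# is 256x256 and the nominal TPU clock is 700 MHz.

MACS = 256 * 256          # 65,536 8-bit multiply-accumulate units
OPS_PER_MAC = 2           # one multiply + one add per MAC per cycle
CLOCK_HZ = 700e6          # assumed nominal clock rate (700 MHz)

peak_tops = MACS * OPS_PER_MAC * CLOCK_HZ / 1e12
print(f"Peak throughput: {peak_tops:.1f} TOPS")  # ~91.8, i.e. the quoted 92 TOPS

# The abstract's relative claims, expressed as ratio ranges:
speedup_range = (15, 30)       # TPU speedup over contemporary CPU/GPU
tops_per_watt_range = (30, 80) # TPU TOPS/Watt advantage over CPU/GPU
print(f"Speedup: {speedup_range[0]}X-{speedup_range[1]}X, "
      f"TOPS/Watt: {tops_per_watt_range[0]}X-{tops_per_watt_range[1]}X")
```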
