In-Datacenter Performance Analysis of a Tensor Processing Unit™
Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh
Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre-luc Cantin, Clifford Chao, Chris Clark, Jeremy Coriell, Mike Daley,
Matt Dau, Jeffrey Dean, Ben Gelb, Tara Vazir Ghaemmaghami, Rajendra Gottipati, William Gulland, Robert Hagmann, C.
Richard Ho, Doug Hogberg, John Hu, Robert Hundt, Dan Hurt, Julian Ibarz, Aaron Jaffey, Alek Jaworski, Alexander Kaplan,
Harshit Khaitan, Andy Koch, Naveen Kumar, Steve Lacy, James Laudon, James Law, Diemthu Le, Chris Leary, Zhuyuan
Liu, Kyle Lucke, Alan Lundin, Gordon MacKean, Adriana Maggiore, Maire Mahony, Kieran Miller, Rahul Nagarajan, Ravi
Narayanaswami, Ray Ni, Kathy Nix, Thomas Norrie, Mark Omernick, Narayana Penukonda, Andy Phelps, Jonathan Ross,
Matt Ross, Amir Salek, Emad Samadiani, Chris Severn, Gregory Sizikov, Matthew Snelham, Jed Souter, Dan Steinberg,
Andy Swing, Mercedes Tan, Gregory Thorson, Bo Tian, Horia Toma, Erick Tuttle, Vijay Vasudevan, Richard Walter, Walter
Wang, Eric Wilcox, and Doe Hyun Yoon
Google, Inc., Mountain View, CA USA
Email: {jouppi, cliffy, nishantpatil, davidpatterson}@google.com
To appear at the 44th International Symposium on Computer Architecture (ISCA), Toronto, Canada, June 26, 2017.
Abstract
Many architects believe that major improvements in cost-energy-performance must now come from domain-specific
hardware. This paper evaluates a custom ASIC—called a Tensor Processing Unit (TPU)—deployed in datacenters
since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC
matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB)
software-managed on-chip memory. The TPU’s deterministic execution model is a better match to the 99th-percentile
response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs
(caches, out-of-order execution, multithreading, multiprocessing, prefetching, …) that help average throughput more
than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big
memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an
Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level
TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our
datacenters’ NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X–30X
faster than its contemporary GPU or CPU, with TOPS/Watt about 30X–80X higher. Moreover, using the GPU’s
GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the
CPU.
Index terms–DNN, MLP, CNN, RNN, LSTM, neural network, domain-specific architecture, accelerator
1. Introduction to Neural Networks
The synergy between the large data sets in the cloud and the numerous computers that power it has enabled a renaissance in
machine learning. In particular, deep neural networks
(DNNs) have led to breakthroughs such as reducing word error rates in
speech recognition by 30% over traditional approaches, which was the biggest gain in 20 years [Dea16]; cutting the error rate
in an image recognition competition from 26% in 2011 to 3.5% [Kri12] [Sze15] [He16]; and beating a human champion at
Go [Sil16].
Neural networks (NN) target brain-like functionality and are based on a simple artificial neuron: a nonlinear function
(such as max(0, value)) of a weighted sum of the inputs. These artificial neurons are collected into layers, with the
outputs of one layer becoming the inputs of the next one in the sequence. The “deep” part of DNN comes from going beyond
a few layers, as the large data sets in the cloud allowed more accurate models to be built by using extra and larger layers to
capture higher levels of patterns or concepts, and GPUs provided enough computing to develop them.
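As a concrete illustration (not code from the paper), the NumPy sketch below implements one such layer of artificial neurons: a nonlinearity, here max(0, value), applied to a weighted sum of the inputs, with the outputs of each layer feeding the next. All layer sizes, names, and weight values are arbitrary placeholders.

    import numpy as np

    def relu(x):
        """Nonlinear activation: max(0, value)."""
        return np.maximum(0.0, x)

    def dense_layer(inputs, weights, bias):
        """One layer of artificial neurons: a nonlinearity applied to a
        weighted sum of the inputs (inputs @ weights + bias)."""
        return relu(inputs @ weights + bias)

    # Toy 3-layer "deep" network: each layer's outputs feed the next layer.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((1, 8))             # one input example, 8 features
    w1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
    w2, b2 = rng.standard_normal((16, 16)), np.zeros(16)
    w3, b3 = rng.standard_normal((16, 4)), np.zeros(4)

    h1 = dense_layer(x, w1, b1)
    h2 = dense_layer(h1, w2, b2)
    y = dense_layer(h2, w3, b3)
    print(y.shape)  # (1, 4)

In practice the weights w1..w3 would be set by training rather than drawn at random, which is the distinction the next paragraph describes.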
The two phases of NN are called training
(or learning) and inference
(or prediction), and they refer to development
versus production. The developer chooses the number of layers and the type of NN, and training determines the weights.
Virtually all training today is in floating point, which is one reason GPUs have been so popular. A step called quantization
transforms floating-point numbers into narrow integers—often just 8 bits—which are usually good enough for inference.
Eight-bit integer multiplies can use 6X less energy and take 6X less area than IEEE 754 16-bit floating-point multiplies, and the
advantage for integer addition is 13X in energy and 38X in area [Dal16].
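As a rough sketch of what quantization means (the paper does not prescribe this exact recipe), the fragment below maps floating-point weights to 8-bit integers with a single shared scale factor, so that real_value ≈ scale × int8_value; inference can then perform its multiply-accumulates on the 8-bit values. The symmetric, max-absolute-value scale used here is one common post-training choice, chosen only for illustration.

    import numpy as np

    def quantize_int8(x):
        """Map float values to 8-bit integers with one shared linear scale.
        Minimal symmetric scheme: real_value ~= scale * int8_value."""
        scale = np.max(np.abs(x)) / 127.0   # fit the largest magnitude into int8
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    weights = np.random.default_rng(0).standard_normal((4, 4)).astype(np.float32)
    q, scale = quantize_int8(weights)
    print(np.max(np.abs(weights - dequantize(q, scale))))  # small quantization error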