Thursday, April 13, 2017

In-Datacenter Performance Analysis of a Tensor Processing Unit™

Norm Jouppi mentioned it in a Google blog entry a few days ago: he and his team have released some information about the TPU. This is very interesting because of how algorithms are now changing hardware. Sure enough, we know that NVIDIA has shifted its architecture to follow the successful development of deep neural networks (and continues to do so), but when Google announced the TPU last year, one could only wonder what they would be doing better than a chip maker. The author of "Does Google’s TPU Investment Make Sense Going Forward?" wrote about this question, but I believe he misses the point entirely.

That point was made abundantly clear in a Wired article:
"It’s not used to train the neural network beforehand. But as Jouppi explains, even that still saves the company quite a bit. It didn’t have to build, say, an extra 15 data centers."
The reason Google did not wait for NVIDIA's architecture to change or for Moore's law to kick in (that is, using OPM, other people's money, to do the work) is mostly that this hardware effort spared them quite a few bucks. It all comes down to the fact that we are all collectively not fast enough to make sense of data.

Here is Norm et al.'s paper: In-Datacenter Performance Analysis of a Tensor Processing Unit™ by Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre-luc Cantin, Clifford Chao, Chris Clark, Jeremy Coriell, Mike Daley, Matt Dau, Jeffrey Dean, Ben Gelb, Tara Vazir Ghaemmaghami, Rajendra Gottipati, William Gulland, Robert Hagmann, C. Richard Ho, Doug Hogberg, John Hu, Robert Hundt, Dan Hurt, Julian Ibarz, Aaron Jaffey, Alek Jaworski, Alexander Kaplan, Harshit Khaitan, Andy Koch, Naveen Kumar, Steve Lacy, James Laudon, James Law, Diemthu Le, Chris Leary, Zhuyuan Liu, Kyle Lucke, Alan Lundin, Gordon MacKean, Adriana Maggiore, Maire Mahony, Kieran Miller, Rahul Nagarajan, Ravi Narayanaswami, Ray Ni, Kathy Nix, Thomas Norrie, Mark Omernick, Narayana Penukonda, Andy Phelps, Jonathan Ross, Matt Ross, Amir Salek, Emad Samadiani, Chris Severn, Gregory Sizikov, Matthew Snelham, Jed Souter, Dan Steinberg, Andy Swing, Mercedes Tan, Gregory Thorson, Bo Tian, Horia Toma, Erick Tuttle, Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, and Doe Hyun Yoon

Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC—called a Tensor Processing Unit (TPU)—deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU’s deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters’ NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X - 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X - 80X higher. Moreover, using the GPU’s GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
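As a quick back-of-the-envelope check, the 92 TOPS peak figure follows directly from the 256x256 array of 8-bit MACs clocked at 700 MHz, with products accumulated into 32-bit registers. Here is a minimal sketch in NumPy (my own illustration, not code from the paper; the TPU streams operands through a systolic array, but the arithmetic comes out the same):

# Back-of-the-envelope sketch (not the paper's code): where the 92 TOPS peak
# comes from, and what 8-bit multiply / 32-bit accumulate arithmetic looks like.
import numpy as np

macs = 256 * 256                          # 65,536 8-bit MACs in the matrix multiply unit
clock_hz = 700e6                          # 700 MHz clock, per the paper
peak_tops = 2 * macs * clock_hz / 1e12    # each MAC counts as 2 ops (multiply + add)
print(f"peak throughput ~ {peak_tops:.1f} TOPS")   # ~91.8, i.e. the quoted 92 TOPS

# Toy quantized inference step: int8 activations and weights, int32 accumulators.
rng = np.random.default_rng(0)
activations = rng.integers(-128, 128, size=(1, 256), dtype=np.int8)
weights = rng.integers(-128, 128, size=(256, 256), dtype=np.int8)
acc = activations.astype(np.int32) @ weights.astype(np.int32)
print(acc.shape, acc.dtype)               # (1, 256) int32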


1 comment:

SeanVN said...

The specialized hardware these days tends to use 8-bit or lower precision for speed.
Some of the early genetic algorithms used bit flipping as a mutation. That sounds naive, but it is really a sort of scale-free mutation: a single-bit mutation of an 8-bit unsigned number changes it by 1, 2, 4, 8, 16, 32, 64, or 128, a sort of exponential distribution of step sizes.
You can compare that to the scale-free mutation that adds + or - exp(-c*rnd()), where c is a positive number and rnd() returns a uniform value between 0 and 1. That mutation has uniform density across magnitudes: p(1) = p(0.1) = p(0.01), etc.
So I think random bit flipping is something you can try if you want to evolve low-precision deep neural nets, especially as backpropagation is more problematic in such cases.
I'm sure I gave this reference before: https://pdfs.semanticscholar.org/c980/dc8942b4d058be301d463dc3177e8aab850e.pdf
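For what it's worth, here is a minimal sketch of the two mutation schemes contrasted in the comment above (my own illustration, not code from the linked paper): flipping a random bit of an 8-bit unsigned weight gives step sizes of 1, 2, 4, ..., 128, while adding + or - exp(-c*rnd()) spreads step magnitudes roughly uniformly across decades.

# Sketch of the two "scale-free" mutations described in the comment (illustrative only).
import math
import random

def bit_flip_mutate(w):
    """Flip one random bit of an 8-bit unsigned weight: step size is 1, 2, 4, ..., 128."""
    return w ^ (1 << random.randrange(8))

def scale_free_mutate(w, c=10.0):
    """Add +/- exp(-c*u) with u ~ U(0,1): log of the step size is uniform, so its
    density is roughly the same across decades of magnitude."""
    return w + random.choice((-1.0, 1.0)) * math.exp(-c * random.random())

print(bit_flip_mutate(0b01010101))   # e.g. 85 with one bit flipped
print(scale_free_mutate(0.5))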
