
Wednesday, June 24, 2020

A Domain-Specific Supercomputer for Training Deep Neural Networks

A good explanation of the phases of applying computing power to problems of this kind.

A Domain-Specific Supercomputer for Training Deep Neural Networks
By Norman P. Jouppi, Doe Hyun Yoon, George Kurian, Sheng Li, Nishant Patil, James Laudon, Cliff Young, David Patterson
Communications of the ACM, July 2020, Vol. 63 No. 7, Pages 67-78
DOI: 10.1145/3360307

The recent success of deep neural networks (DNNs) has inspired a resurgence in domain-specific architectures (DSAs) to run them, partially as a result of the deceleration of microprocessor performance improvement due to the slowing of Moore's Law [17]. DNNs have two phases: training, which constructs accurate models, and inference, which serves those models. Google's Tensor Processing Unit (TPU) offered 50x improvement in performance per watt over conventional architectures for inference [19, 20]. We naturally asked whether a successor could do the same for training. This article explores how Google built the first production DSA for the much harder training problem, first deployed in 2017. ... "
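To make the training/inference distinction the authors draw concrete, here is a minimal sketch (not from the article) in plain JAX: a tiny, made-up regression network whose parameters are fitted in a training loop and then served with a forward pass. The network shape, data, and hyperparameters are illustrative assumptions only.

import jax
import jax.numpy as jnp

def predict(params, x):
    # Forward pass of a one-hidden-layer network (the "inference" phase).
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return h @ w2 + b2

def loss(params, x, y):
    # Mean-squared error that drives the "training" phase.
    return jnp.mean((predict(params, x) - y) ** 2)

@jax.jit
def train_step(params, x, y, lr=1e-2):
    # One gradient-descent update; training repeats this many times.
    grads = jax.grad(loss)(params, x, y)
    return [p - lr * g for p, g in zip(params, grads)]

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = [jax.random.normal(k1, (1, 16)), jnp.zeros(16),
          jax.random.normal(k2, (16, 1)), jnp.zeros(1)]

# Training: iterate over examples to construct an accurate model.
x = jax.random.uniform(k3, (128, 1))
y = jnp.sin(2.0 * x)
for _ in range(500):
    params = train_step(params, x, y)

# Inference: serve the trained model on new inputs.
print(predict(params, jnp.array([[0.5]])))

Training runs this update step many times over the dataset and is far more compute-intensive than inference, which is why the article treats it as the harder problem for a domain-specific accelerator.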
