
Friday, July 01, 2022

We are Training Much Faster Now

Faster, better AI training is outlined and described here.

We’re Training AI Twice as Fast This Year as Last
New MLPerf rankings show training times plunging
By Samuel K. Moore, in IEEE Spectrum

According to the best measures we’ve got, a set of benchmarks called MLPerf, machine-learning systems can be trained nearly twice as quickly as they could last year. It’s a figure that outstrips Moore’s Law, but also one we’ve come to expect. Most of the gain is thanks to software and systems innovations, but this year also gave the first peek at what some new processors, notably from Graphcore and Intel subsidiary Habana Labs, can do.

The once-crippling time it took to train a neural network to do its task is the problem that launched startups like Cerebras and SambaNova and drove companies like Google to develop machine-learning accelerator chips in house. But the new MLPerf data shows that training time for standard neural networks has gotten a lot less taxing in a short period of time. And that speedup has come from much more than just the advance of Moore’s Law.

This capability has only incentivized machine-learning experts to dream big. So the size of new neural networks continues to outpace computing power.

Called by some “the Olympics of machine learning,” MLPerf consists of eight benchmark tests: image recognition, medical-imaging segmentation, two versions of object detection, speech recognition, natural-language processing, recommendation, and a form of gameplay called reinforcement learning. (One of the object-detection benchmarks was updated for this round to a neural net that is closer to the state of the art.) Computers and software from 21 companies and institutions compete on any or all of the tests. This time around, officially called MLPerf Training 2.0, they collectively submitted 250 results.

Very few commercial and cloud systems were tested on all eight, but Nvidia director of product development for accelerated computing Shar Narasimhan gave an interesting example of why systems should be able to handle such breadth: Imagine a person with a smartphone snapping a photo of a flower and asking the phone: “What kind of flower is this?” It seems like a single request, but answering it would likely involve 10 different machine-learning models, several of which are represented in MLPerf.

To give a taste of the data, for each benchmark we’ve listed the fastest results for commercially available computers and cloud offerings (Microsoft Azure and Google Cloud) by how many machine-learning accelerators (usually GPUs) were involved. Keep in mind that some of these will be a category of one. For instance, there really aren’t that many places that can devote thousands of GPUs to a task. Likewise, there are some benchmarks where systems beat their nearest competitor by a matter of seconds or where five or more entries landed within a few minutes of each other. So if you’re curious about the nuances of AI performance, check out the complete list. ... (much more at link)

