
Friday, November 18, 2022

Reports on Computing, Big and Small

Training and More

New Records for the Biggest and Smallest AI Computers
Nvidia H100 and Intel Sapphire Rapids Xeon debut on MLPerf training benchmarks
By Samuel K. Moore, in IEEE Spectrum

The machine-learning consortium MLCommons released its latest set of benchmark results last week, offering a glimpse at the capabilities of chips new and old as they tackled everything from executing lightweight AI on the tiniest systems to training neural networks at server and supercomputer scales. The benchmark tests saw the debut of new chips from Intel and Nvidia, as well as speed boosts from software improvements and predictions that new software will play a role in speeding up these chips in the years after their debut.

Training Servers

Training AI has been a problem that’s driven billions of dollars in investment, and it seems to be paying off. “A few years ago we were talking about training these networks in days or weeks, now we’re talking about minutes,” says Dave Salvator, director of product marketing at Nvidia.

There are eight benchmarks in the MLPerf training suite, but here I'm showing results from just two, image classification and natural-language processing, because although they don't give a complete picture, they're illustrative of what's happening. Not every company puts up benchmark results every time; in the past, systems from Baidu, Google, Graphcore, and Qualcomm have posted results, but none of these were on the most recent list. And some companies whose goal is to train the very biggest neural networks, such as Cerebras and SambaNova, have never participated.

Another note about the results I’m showing: they are incomplete. To keep eye-glazing to a minimum, I’ve listed only the fastest system of each configuration. There are four categories in the main “closed” contest: cloud (self-evident), on premises (systems you could buy and install in-house right now), preview (systems you can buy soon but not now), and R&D (interesting but odd, so I excluded them). I then listed the fastest training result in each category for each configuration, that is, the number of accelerators in a computer. If you want to see the complete list, it’s at the MLCommons website.  ...
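As a worked illustration of that filtering step, here is a minimal Python sketch (not the author's actual workflow) that keeps only the fastest submission for each availability category and accelerator count. The file name mlperf_training.csv and the column names category, accelerators, and train_minutes are assumptions made for the example; the real results tables published at mlcommons.org use their own layout.

# Sketch: reduce a results table to the fastest entry per
# (category, accelerator count) pair. Column names are assumed.
import csv
from collections import defaultdict

def fastest_per_config(path: str) -> dict:
    """Return the lowest training time (minutes) keyed by (category, accelerator_count)."""
    best = defaultdict(lambda: float("inf"))
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["category"],           # e.g. "cloud", "on-prem", "preview"
                   int(row["accelerators"]))  # number of accelerators in the system
            best[key] = min(best[key], float(row["train_minutes"]))
    return dict(best)

if __name__ == "__main__":
    for (category, accels), minutes in sorted(fastest_per_config("mlperf_training.csv").items()):
        print(f"{category:>8}  {accels:>5} accelerators  {minutes:7.2f} min")

The same grouping logic applies however the raw spreadsheet is organized: group by availability category and system size, then take the minimum training time within each group.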
