
Thursday, January 02, 2020

New Scaling Approach for Deep Learning

Fast and effective training is important, especially for many IoT and edge devices, so this is a potential advance.

Deep Learning breakthrough made by Rice University scientists
Rice University's MACH training system scales further than previous approaches. By Jim Salter in Ars Technica.

In an earlier deep learning article, we talked about how inference workloads—the use of already-trained neural networks to analyze data—can run on fairly cheap hardware, but running the training workload that the neural network "learns" on is orders of magnitude more expensive.

In particular, the more potential inputs you have to an algorithm, the more out of control your scaling problem gets when analyzing its problem space. This is where MACH, a research project authored by Rice University's Tharun Medini and Anshumali Shrivastava, comes in. MACH is an acronym for Merged Average Classifiers via Hashing, and according to lead researcher Shrivastava, "[its] training times are about 7-10 times faster, and... memory footprints are 2-4 times smaller" than those of previous large-scale deep learning techniques. ...
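To make the name concrete: the idea behind merged average classifiers via hashing is to replace one huge classifier over K classes with several small "meta-classifiers" over hashed buckets, then average their bucket probabilities to score each class. Below is a minimal toy sketch of that merging step in Python/NumPy, under stated assumptions; the parameter names (K, B, R), the random bucket hashes, and the mach_score helper are illustrative choices for this post, not the authors' reference implementation.

import numpy as np

# Toy sketch of the MACH merging idea (an assumption-laden illustration,
# not the Rice implementation): hash K classes into B << K buckets with
# R independent hash functions, and score a class by averaging the
# probability of its bucket across the R small classifiers.

rng = np.random.default_rng(0)

K = 10_000   # number of output classes (large in extreme classification)
B = 64       # buckets per hash table (B << K, so each classifier is small)
R = 8        # number of independent hash functions / meta-classifiers

# R independent random hash assignments: class k -> bucket in [0, B)
hashes = rng.integers(0, B, size=(R, K))

def mach_score(meta_probs):
    """Merge R meta-classifier outputs into per-class scores.

    meta_probs: shape (R, B), softmax output of each small classifier
    for one input example. Returns shape (K,): for each class, the
    bucket probability averaged over the R repetitions.
    """
    per_class = np.stack([meta_probs[r, hashes[r]] for r in range(R)])
    return per_class.mean(axis=0)

# Fake meta-classifier outputs for one example (each row sums to 1).
logits = rng.normal(size=(R, B))
meta_probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

scores = mach_score(meta_probs)
print("predicted class:", scores.argmax(), "score:", scores.max())

The memory saving in the sketch is visible in the shapes: the R small output layers have R * B = 512 output units in total instead of K = 10,000, which is the flavor of the 2-4x footprint reduction the article describes for much larger class counts.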
