Good direction.
We Can Train Big Neural Networks on Small Devices
IEEE Spectrum
Matthew Hutson, September 20, 2022
A new training method expands small devices' ability to train large neural networks, while potentially helping to protect privacy. The University of California, Berkeley's Shishir Patil and colleagues built the private optimal energy training (POET) system, which integrates two memory-reduction techniques, paging (offloading activations to secondary storage) and rematerialization (recomputing them on demand), that earlier systems had combined only through suboptimal heuristics. Users feed POET a device's technical details and the architecture of the neural network they want to train, along with memory and time budgets; the system generates a training schedule that minimizes energy usage. Framing the problem as a mixed integer linear program was critical to POET's effectiveness. Testing showed the system could cut memory usage by about 80% without significantly increasing energy consumption. ...
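The mixed-integer framing can be illustrated with a toy sketch: for each layer's activation, choose exactly one of keep-in-RAM, rematerialize, or page to storage, minimizing extra energy subject to memory and time budgets. This is not POET's actual formulation (the real MILP models per-timestep memory occupancy and device-specific energy costs); all variable names and numbers below are hypothetical, and the solver here is PuLP's bundled default.

```python
# Toy MILP in the spirit of POET: per-activation choice among
# keep-in-RAM, rematerialize (recompute), or page to flash.
# All costs and budgets are made-up illustrative numbers.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

n_layers = 6
mem     = [40, 40, 30, 30, 20, 20]   # activation size per layer (MB)
e_remat = [5, 5, 4, 4, 3, 3]         # energy to recompute (mJ)
e_page  = [8, 8, 6, 6, 4, 4]         # energy to page out/in (mJ)
t_remat = [2, 2, 2, 2, 1, 1]         # extra time to recompute (ms)
t_page  = [6, 6, 5, 5, 3, 3]         # extra time to page (ms)
MEM_BUDGET  = 60                     # RAM available for activations (MB)
TIME_BUDGET = 15                     # allowed runtime overhead (ms)

prob  = LpProblem("poet_toy", LpMinimize)
keep  = [LpVariable(f"keep_{i}",  cat=LpBinary) for i in range(n_layers)]
remat = [LpVariable(f"remat_{i}", cat=LpBinary) for i in range(n_layers)]
page  = [LpVariable(f"page_{i}",  cat=LpBinary) for i in range(n_layers)]

# Objective: minimize total extra energy from rematerialization and paging.
prob += lpSum(e_remat[i] * remat[i] + e_page[i] * page[i]
              for i in range(n_layers))

for i in range(n_layers):
    # Each activation is handled in exactly one way.
    prob += keep[i] + remat[i] + page[i] == 1

# Only activations kept resident count against the RAM budget.
prob += lpSum(mem[i] * keep[i] for i in range(n_layers)) <= MEM_BUDGET
# Recomputing and paging both add wall-clock time.
prob += lpSum(t_remat[i] * remat[i] + t_page[i] * page[i]
              for i in range(n_layers)) <= TIME_BUDGET

prob.solve()
for i in range(n_layers):
    choice = "keep" if value(keep[i]) else ("remat" if value(remat[i]) else "page")
    print(f"layer {i}: {choice}")
```

The appeal of the MILP view is visible even in this sketch: rather than a fixed heuristic (e.g., "always page the largest activations"), the solver trades recomputation energy against paging energy per layer and returns a schedule that is optimal for the stated budgets.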