
Friday, December 04, 2020

Shrinking BERT Networks to Model Language

Considerable shrinking of neural networks, making applications at the edge more likely.

A new approach could lower computing costs and increase accessibility to state-of-the-art natural language processing.

Daniel Ackerman | MIT News Office

Researchers at the Massachusetts Institute of Technology (MIT), the University of Texas at Austin, and the MIT-IBM Watson Artificial Intelligence Laboratory identified lean subnetworks within a state-of-the-art neural network approach to natural language processing (NLP). These subnetworks, found in the Bidirectional Encoder Representations from Transformers (BERT) network, could potentially enable more users to develop NLP tools on less bulky, more efficient systems, like smartphones. BERT is trained on a massive dataset by repeatedly attempting to fill in words omitted from a passage of writing; users can then fine-tune its neural network for a specific task. By iteratively trimming parameters from the BERT model and then comparing each new subnetwork's performance to that of the original model, the team found "winning ticket" subnetworks that were 40% to 90% leaner yet still executed tasks successfully, and no task-specific fine-tuning was required to identify them.  .... 
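To make the "iteratively trimming parameters" step concrete, here is a minimal sketch of lottery-ticket-style iterative magnitude pruning on BERT's linear layers, assuming PyTorch and Hugging Face Transformers are available. The model name, pruning rate per round, and the evaluate() placeholder are illustrative assumptions, not the researchers' exact setup.

import torch
import torch.nn.utils.prune as prune
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")

def prune_linear_layers(model, amount):
    """Zero out the smallest-magnitude weights in every linear layer."""
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)

def sparsity(model):
    """Fraction of linear-layer weights currently masked to zero."""
    zeros, total = 0, 0
    for module in model.modules():
        if isinstance(module, torch.nn.Linear) and hasattr(module, "weight_mask"):
            zeros += int((module.weight_mask == 0).sum())
            total += module.weight_mask.numel()
    return zeros / max(total, 1)

# Trim 20% of the remaining weights each round, then compare the pruned
# subnetwork's task performance against the unpruned baseline.
for round_idx in range(5):
    prune_linear_layers(model, amount=0.2)
    print(f"round {round_idx}: sparsity = {sparsity(model):.0%}")
    # evaluate(model)  # hypothetical: score this subnetwork on a downstream task

A subnetwork whose mask survives this loop while matching the original model's accuracy is a candidate "winning ticket" in the sense described above; the study's finding is that such tickets can be located in the pre-trained BERT model itself, before any task-specific fine-tuning.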
