Anything that significantly decreases training time can be useful, especially for adapting to changes in updated data, which likely shares similarities with the earlier data. It is a kind of priming of the pump, which we also experimented with in older neural techniques. From ACM Tech News:
New Technique Cuts AI Training Time by More Than 60%
NC State News By Matt Shipman
North Carolina State University (NC State) researchers have developed a method that substantially reduces training time for deep learning networks without losing accuracy, expediting development of new artificial intelligence (AI) applications. According to NC State's Xipeng Shen, the Adaptive Deep Reuse technique has demonstrated a maximum AI training time reduction of 69%. The method is founded on the realization that many chunks of data in a dataset are similar to each other, which enables a deep learning network to save power by applying filters to one data chunk, then applying the results to all similar chunks in the same set. Said Shen, "We think Adaptive Deep Reuse is a valuable tool, and look forward to working with industry and research partners to demonstrate how it can be used to advance AI." ...
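The core idea, grouping similar input chunks and computing a filter's output once per group, can be illustrated with a minimal NumPy sketch. This is not the NC State implementation; the grouping here uses simple rounding as the similarity criterion, whereas the actual Adaptive Deep Reuse method adapts its clustering during training. The function name and the `decimals` parameter are illustrative choices, not from the paper.

```python
import numpy as np

def reuse_filter(chunks, filt, decimals=1):
    """Apply a filter to data chunks, computing each dot product only
    once per group of (approximately) similar chunks.

    chunks:   (n, d) array of input vectors, e.g. flattened patches
    filt:     (d,) filter weights
    decimals: coarseness of the similarity grouping (a stand-in for
              the paper's adaptive clustering)
    """
    # Chunks whose rounded values match are treated as "similar".
    keys = np.round(chunks, decimals=decimals)
    _, rep_idx, inverse = np.unique(
        keys, axis=0, return_index=True, return_inverse=True)
    # Apply the filter once per representative chunk ...
    rep_results = chunks[rep_idx] @ filt
    # ... then reuse each result for every similar chunk in its group.
    return rep_results[inverse]

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))
# A dataset with many near-duplicate chunks, as the article describes.
chunks = np.repeat(base, 250, axis=0) + rng.normal(scale=1e-4, size=(1000, 8))
filt = rng.normal(size=8)

approx = reuse_filter(chunks, filt)
exact = chunks @ filt
print(np.max(np.abs(approx - exact)))  # small approximation error
```

With 1,000 chunks collapsing into only a handful of groups, the filter is applied a few times instead of a thousand, at the cost of a small, controllable approximation error; the reported 69% training-time reduction comes from applying this kind of reuse throughout training.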