Adapting ML networks over time is an important idea. How much better will this be than simply retraining the networks on new data? Automating maintenance would at least ensure it gets done. And note the claim of "exploring degrees of complexity" ... what is the value of that toward better or more resilient solutions? And what would "scaling down" toward richer solutions look like in practice?
'Liquid' ML System Adapts to Changing Conditions
MIT News, Daniel Ackerman, January 28, 2021
A team of researchers from the Massachusetts Institute of Technology (MIT), the Institute of Science and Technology Austria, and the Vienna University of Technology in Austria has developed flexible algorithms, also known as "liquid" networks, that continually alter their underlying equations to adapt to new data inputs. Unlike most neural networks, whose behaviors are fixed after training, a liquid network can adapt to the variability of real-world systems and is more resilient to unexpected or noisy data. Said MIT's Ramin Hasani, "Just changing the representation of a neuron, you can really explore some degrees of complexity you couldn't explore otherwise. The model itself is richer in terms of expressivity." The network outperformed other state-of-the-art time-series algorithms by a few percentage points at predicting future values across several datasets. Said Hasani, "Everyone talks about scaling up their network. We want to scale down, to have fewer but richer nodes." ...
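To make the "continually alter underlying equations" idea concrete, here is a minimal sketch of a liquid time-constant-style neuron, the kind of unit Hasani's group has described. This is an illustrative toy, not the authors' implementation: the function `ltc_step`, the gate `f`, and all parameter shapes are assumptions chosen for clarity. The key point is that an input-dependent gate modulates the neuron's effective time constant, so the differential equation governing the state changes as the input stream changes.

```python
import numpy as np

def ltc_step(x, I, W, b, tau, A, dt=0.01):
    """One Euler-integration step of a liquid time-constant-style neuron.

    The input-dependent nonlinearity f modulates the effective time
    constant, so the governing equation itself shifts with the input --
    a toy version of the "liquid" behavior described in the article.
    """
    # Gate computed from the current state and the current input.
    f = np.tanh(W @ np.concatenate([x, I]) + b)
    # dx/dt = -(1/tau + f) * x + f * A
    # The decay rate (1/tau + f) varies with the input via f.
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Tiny demo: 2 hidden units driven by 1 time-varying input signal.
rng = np.random.default_rng(0)
x = np.zeros(2)                    # hidden state
W = rng.normal(size=(2, 3))        # gate weights over [state, input]
b = np.zeros(2)
tau = np.ones(2)                   # base time constants
A = np.ones(2)                     # bias of the driven dynamics
for t in range(100):
    I = np.array([np.sin(0.1 * t)])
    x = ltc_step(x, I, W, b, tau, A)
print(x.shape)
```

Because `f` is bounded by `tanh`, the effective decay rate stays in a controlled range, which is one reason such dynamics remain stable under noisy inputs.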