Forgetting in Neural Networks. The intro is good and well worth reading to understand the topic; it then becomes technical.
IBM Uses Continual Learning to Avoid The Amnesia Problem in Neural Networks
Tags: IBM, Learning, Neural Networks, Training (in KDnuggets)
Using continual learning might help avoid the well-known catastrophic forgetting problem in neural networks.
By Jesus Rodriguez, Intotheblock.
I often joke that neural networks suffer from a continuous amnesia problem: every time they are retrained, they lose the knowledge accumulated in previous iterations. Building neural networks that can learn incrementally without forgetting is one of the existential challenges facing the current generation of deep learning solutions. Over a year ago, researchers from IBM published a paper proposing a continual learning method that allows the implementation of neural networks that can build knowledge incrementally.
Neural networks have achieved impressive milestones in the last few years, from beating Go champions to mastering multi-player games. However, neural network architectures remain constrained to very specific domains and are unable to transfer their knowledge to new areas. Furthermore, current neural network models are only effective when trained over large stationary distributions of data and struggle when training over changing, non-stationary distributions. In other words, neural networks can effectively solve many tasks when trained from scratch, continually sampling from all tasks until training has converged, but they struggle to train incrementally when there is a time dependence in the data they receive. Paradoxically, most real-world AI scenarios are based on incremental, not stationary, knowledge. Throughout the history of artificial intelligence (AI), there have been several theories and proposed models to deal with the continual learning challenge. ...
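To make the forgetting effect concrete, here is a minimal sketch (not IBM's method) of catastrophic forgetting: a small network is trained on a "task A", then retrained on a "task B" drawn from a different region of input space, and its accuracy on task A collapses toward chance. The synthetic data, model size, and task definitions are illustrative assumptions, not anything from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(centers):
    """Two-class Gaussian blobs; each task uses different cluster centers."""
    X, y = [], []
    for label, c in enumerate(centers):
        X.append(torch.randn(500, 2) * 0.5 + torch.tensor(c))
        y.append(torch.full((500,), label, dtype=torch.long))
    return torch.cat(X), torch.cat(y)

# Task A and task B share the same labels but come from different
# (non-stationary) regions of the input space.
task_a = make_task([(-2.0, 0.0), (2.0, 0.0)])  # classes separated along x
task_b = make_task([(0.0, -2.0), (0.0, 2.0)])  # classes separated along y

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(X, y, epochs=200):
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

def accuracy(X, y):
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

train(*task_a)
print(f"Task A accuracy after training on A: {accuracy(*task_a):.2f}")

# Sequential training on task B with no access to task A's data.
train(*task_b)
print(f"Task A accuracy after training on B: {accuracy(*task_a):.2f}")  # typically near chance
print(f"Task B accuracy: {accuracy(*task_b):.2f}")
```

Continual learning approaches such as the one described in the IBM paper aim to keep the second printed accuracy high, i.e., to retain task A's knowledge while the network is trained on task B.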