
Monday, March 16, 2020

Learning New Ways to Continually Learn

Not quite what I think of when I think of AGI (Artificial General Intelligence). But sequences of useful tasks and learning can be seen as what humans do, provided they pay attention to both the existing context and the changes in context introduced by the 'intelligence'.

OpenAI’s Jeff Clune on deep learning’s Achilles’ heel and a faster path to AGI
By Khari Johnson in VentureBeat

Neural networks learn differently from people. If a human comes back to a sport after years away, they might be rusty but they will still remember much of what they learned decades ago. A typical neural network, on the other hand, will forget the last thing it was trained to do. Virtually all neural networks today suffer from this “catastrophic forgetting.”

It’s the Achilles’ heel of machine learning, OpenAI research scientist Jeff Clune told VentureBeat, because it prevents machine learning systems from “continual learning,” the ability to remember previous tasks. But some systems can be taught to remember.
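
As an aside, not from the article: the short PyTorch sketch below shows catastrophic forgetting in miniature. A small network is fit to one regression task, then to a second, and its error on the first task jumps after the second round of training. The tasks, architecture, and hyperparameters are hypothetical choices made only for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny regression network; everything here is a toy stand-in for illustration.
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
mse = nn.MSELoss()

x = torch.linspace(-3, 3, 200).unsqueeze(1)
task_a = torch.sin(x)   # "task A" targets
task_b = torch.cos(x)   # "task B" targets

def train(targets, steps=2000):
    for _ in range(steps):
        opt.zero_grad()
        mse(net(x), targets).backward()
        opt.step()

train(task_a)
print("task A error after learning A:", mse(net(x), task_a).item())  # low: net fits A

train(task_b)
print("task A error after learning B:", mse(net(x), task_a).item())  # much higher: A is forgotten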

Before joining OpenAI last month to lead its multi-agent team, Clune worked with researchers from Uber AI Labs and the University of Vermont. This week, they collectively shared ANML (a neuromodulated meta-learning algorithm), which is able to learn 600 sequential tasks with minimal catastrophic forgetting.

“This is relatively unheard-of in machine learning. To my knowledge, it’s the longest sequence of tasks that AI has been able to do, and at the end of it, it’s still pretty good at all the tasks that it saw,” Clune said. “I think that these sorts of advances will be used in almost every situation where we use AI. It will just make AI better.”
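
For readers curious about the mechanism: the paper's central idea is a neuromodulatory network that gates the activations of a prediction network element-wise, so the system can learn where plasticity should be allowed. The sketch below is a simplified, hypothetical rendering of that gating step only; it omits the meta-learning outer loop and the actual convolutional architectures used in the paper, and the layer sizes are made up for illustration.

import torch
import torch.nn as nn

class GatedPredictionNet(nn.Module):
    # Simplified ANML-style gating sketch (illustrative, not the paper's code):
    # a neuromodulatory (NM) pathway maps the input to a gate in (0, 1), which
    # multiplies the prediction pathway's hidden activations element-wise,
    # selectively enabling or suppressing activation and gradient flow.
    def __init__(self, in_dim=784, hidden=256, n_classes=1000):
        super().__init__()
        self.pred_hidden = nn.Linear(in_dim, hidden)        # prediction pathway
        self.pred_out = nn.Linear(hidden, n_classes)
        self.nm = nn.Sequential(nn.Linear(in_dim, hidden),  # neuromodulatory pathway
                                nn.Sigmoid())                # gate values in (0, 1)

    def forward(self, x):
        h = torch.relu(self.pred_hidden(x))
        gate = self.nm(x)
        return self.pred_out(h * gate)                       # gated activations

# Hypothetical usage: one forward pass on a random input vector.
model = GatedPredictionNet()
logits = model(torch.randn(1, 784))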

Clune helped cofound Uber AI Labs in 2017, following the acquisition of Geometric Intelligence, and is one of seven coauthors of a paper called “Learning to Continually Learn,” published Monday on arXiv: https://arxiv.org/abs/2002.09571 ...
