Intriguing: agents learning 'tabula rasa', from scratch. It makes me think of the agent models we used to gather data and feed into a process, reinforcing movement toward an intelligence goal. Still thinking about this.
In the Google AI Blog
Beyond Tabula Rasa: Reincarnating Reinforcement Learning
Posted by Rishabh Agarwal, Senior Research Scientist, and Max Schwarzer, Student Researcher, Google Research, Brain Team
Reinforcement learning (RL) is an area of machine learning that focuses on training intelligent agents using related experiences so they can learn to solve decision making tasks, such as playing video games, flying stratospheric balloons, and designing hardware chips. Due to the generality of RL, the prevalent trend in RL research is to develop agents that can efficiently learn tabula rasa, that is, from scratch without using previously learned knowledge about the problem. However, in practice, tabula rasa RL systems are typically the exception rather than the norm for solving large-scale RL problems. Large-scale RL systems, such as OpenAI Five, which achieves human-level performance on Dota 2, undergo multiple design changes (e.g., algorithmic or architectural changes) during their developmental cycle. This modification process can last months and necessitates incorporating such changes without re-training from scratch, which would be prohibitively expensive. ....
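To make the contrast concrete, here is a minimal sketch (my own assumption-laden illustration, not the method described in the post): a toy tabular Q-learning agent trained tabula rasa from zeros, versus a "reincarnated" agent warm-started from a prior agent's learned Q-values. In practice the reused computation could be a trained policy network, logged data, or a pretrained model rather than a Q-table.

```python
# Sketch only: contrasting tabula rasa training with reusing a prior agent's
# learned values ("reincarnating" it) on a toy chain environment.
import numpy as np

N_STATES, N_ACTIONS = 10, 2  # hypothetical toy chain MDP

def train_q_learning(q_init, episodes=500, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning; q_init carries whatever prior knowledge we start from."""
    rng = np.random.default_rng(seed)
    q = q_init.copy()
    for _ in range(episodes):
        s = 0
        for _ in range(N_STATES):
            # Epsilon-greedy action selection.
            a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(q[s]))
            # Action 1 moves right, action 0 moves left; reward at the far end.
            s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == N_STATES - 1 else 0.0
            q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
            s = s_next
            if r > 0:
                break
    return q

# Tabula rasa: start from zeros and learn everything from scratch.
q_scratch = train_q_learning(np.zeros((N_STATES, N_ACTIONS)))

# Reincarnating RL, in spirit: warm-start the next (possibly redesigned) agent
# from the previous agent's values instead of discarding that computation,
# so far fewer additional episodes are needed.
q_reincarnated = train_q_learning(q_scratch, episodes=50)
```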