Good piece on the topic. It starts with a games-based overview, then gets technical. It still basically shows that this is hard, except in narrow contexts, but it may be the best way to get us to more general intelligence. More to follow, and some good follow-up reading.
An introduction to Imitation Learning by Vitaly Kurin, an Aachen master's student working on Reinforcement Learning.
Introduction to Imitation Learning
Learning from Demonstration: what has been done and the road ahead
" .... Why are we not there yet?
The typical machine learning approach is to train a model from scratch. Give it a million images and some time to figure them out. Give it a week and let it play Space Invaders until it reaches some acceptable score. We, as humans, learn quite differently.
When a typical human starts to play a game he has never seen, he already has a huge amount of prior information. If he sees a door in Montezuma’s Revenge, he realizes that somewhere there should be a key and he needs to find it. When he finds the key, he remembers that the closed door is back through the two previous rooms, and he returns to open it. When he sees a ladder, he realizes that he can climb it because he has done this hundreds of times already.
What if we could somehow transfer human knowledge about the world to an agent? How can we extract all this information? How can we create a model out of it? There is such a way. It’s called Imitation Learning.
Imitation Learning is not the only name for putting human data to good use. Some researchers call it apprenticeship learning; others refer to it as Learning from Demonstration. From our point of view, there is no substantial difference between these terms, and we will use Imitation Learning from now on. ... "
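The excerpt stops before any algorithms are described, but the core idea in its simplest form is behavioral cloning: treat the human demonstrations as a supervised dataset of (state, action) pairs and fit a policy to predict the demonstrator's action. The sketch below illustrates only that idea; the demonstration data, dimensions, and model choice are hypothetical placeholders, not anything from the article.

```python
# Minimal behavioral-cloning sketch: imitation learning reduced to
# supervised learning on recorded (state, action) pairs.
# The "demonstrations" below are synthetic placeholders, not real data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical demonstrations: 1000 states (4-dim observations) and the
# discrete action (0..3) the demonstrator chose in each state.
demo_states = rng.normal(size=(1000, 4))
demo_actions = (demo_states[:, 0] > 0).astype(int) + 2 * (demo_states[:, 1] > 0)

# "Imitate" the demonstrator by fitting a policy that predicts their action.
policy = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
policy.fit(demo_states, demo_actions)

# At play time the agent queries the learned policy instead of learning
# from scratch by trial and error.
new_state = rng.normal(size=(1, 4))
print("imitated action:", policy.predict(new_state)[0])
```

Behavioral cloning is only the starting point; the article goes on to discuss why naively copying demonstrations breaks down once the agent drifts into states the human never visited.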