This is particularly important for complex, unstructured visual environments, such as the smart home. It's part of the reason that generalized robotic and assistant solutions are difficult to build for the smart home. The article below is technical, but the introduction states the underlying problem well. The concept of an 'Imagined Goal' is intriguing.
Visual Reinforcement Learning with Imagined Goals By Vitchyr Pong∗ and Ashvin Nair∗
"We want to build agents that can accomplish arbitrary goals in unstructured complex environments, such as a personal robot that can perform household chores. A promising approach is to use deep reinforcement learning, which is a powerful framework for teaching agents to maximize a reward function. However, the typical reinforcement learning paradigm involves training an agent to solve an individual task with a manually designed reward. For example, you might train a robot to set a dinner table by designing a reward function based on the distance between each plate or utensil and its goal location. This setup requires a person to design the reward function for each task, as well as extra systems like object detectors, which can be expensive and brittle. Moreover, if we want machines that can perform a large repertoire of chores, we would have to repeat this RL training procedure on each new task. ... "
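To make the "manually designed reward" point concrete, here is a minimal sketch (my own illustration, not from the article) of the kind of hand-crafted, distance-based reward the excerpt describes for a table-setting task. The object names and goal coordinates are hypothetical, and the object positions would in practice have to come from an object detector, which is exactly the extra, brittle machinery the authors are arguing against.

```python
import numpy as np

# Hypothetical goal positions for each object on the table (2D, in meters).
GOAL_POSITIONS = {
    "plate": np.array([0.50, 0.30]),
    "fork":  np.array([0.35, 0.30]),
    "knife": np.array([0.65, 0.30]),
}

def table_setting_reward(detected_positions):
    """Negative sum of distances between each object and its goal location.

    `detected_positions` maps object names to 2D positions; in a real system
    these would be produced by a separate object-detection pipeline.
    The reward is 0 only when every object sits exactly at its goal.
    """
    return -sum(
        np.linalg.norm(detected_positions[name] - goal)
        for name, goal in GOAL_POSITIONS.items()
    )

# Example usage with hypothetical detections:
print(table_setting_reward({
    "plate": np.array([0.52, 0.28]),
    "fork":  np.array([0.35, 0.30]),
    "knife": np.array([0.60, 0.35]),
}))
```

Note that this single reward encodes one specific task; a new chore would need a new goal specification and, likely, new perception code, which is the scaling problem the article goes on to address.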