Thursday, January 14, 2021

Offline Reinforcement Learning

Technical, but relatively understandable.

Offline Reinforcement Learning: How Conservative Algorithms Can Enable New Applications

Aviral Kumar and Avi Singh, Dec 7, 2020, in the BAIR (Berkeley AI Research) blog

Deep reinforcement learning has made significant progress in the last few years, with success stories in robotic control, game playing, and science problems. While RL methods present a general paradigm where an agent learns from its own interaction with an environment, this requirement for "active" data collection is also a major hindrance in the application of RL methods to real-world problems, since active data collection is often expensive and potentially unsafe. An alternative "data-driven" paradigm of RL, referred to as offline RL (or batch RL), has recently regained popularity as a viable path towards effective real-world RL. As shown in the figure below, offline RL requires learning skills solely from previously collected datasets, without any active environment interaction. It provides a way to utilize previously collected datasets from a variety of sources, including human demonstrations, prior experiments, domain-specific solutions, and even data from different but related problems, to build complex decision-making engines. ...
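To make the "learning from a fixed dataset" idea concrete, here is a minimal sketch of batch Q-learning on a hypothetical toy problem (the states, actions, and transitions below are invented for illustration, not taken from the article). The key point is that the training loop only ever touches a static list of logged transitions; it never queries an environment. Real offline RL algorithms such as the conservative methods discussed in the article add extra machinery to avoid overvaluing actions that are absent from the data.

```python
import numpy as np

# Hypothetical fixed batch of transitions (s, a, r, s_next, done)
# collected earlier by some behavior policy. Training below uses
# ONLY this list -- no environment interaction, the defining
# property of offline (batch) RL.
dataset = [
    (0, 0, 0.0, 1, False),
    (0, 1, 1.0, 2, False),
    (1, 0, 0.0, 2, False),
    (1, 1, 5.0, 2, True),
    (2, 0, 0.0, 0, False),
    (2, 1, 0.0, 0, False),
]

n_states, n_actions, gamma = 3, 2, 0.9
Q = np.zeros((n_states, n_actions))

# Repeatedly sweep the fixed dataset and apply the Bellman backup.
# Transitions here are deterministic, so a full backup (learning
# rate 1) converges; stochastic data would need a smaller step size.
for _ in range(100):
    for s, a, r, s_next, done in dataset:
        Q[s, a] = r if done else r + gamma * Q[s_next].max()

greedy_policy = Q.argmax(axis=1)
print(greedy_policy)  # -> [1 1 0]
```

Note that this plain backup trusts its own value estimates for every action, which is exactly what fails when the dataset does not cover the state-action space well; that failure mode is what motivates the conservative algorithms the article's title refers to.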
