Abstraction from demonstration for efficient reinforcement learning in high-dimensional domains

Authors:

Abstract

Reinforcement learning (RL) and learning from demonstration (LfD) are two popular families of algorithms for learning policies for sequential decision problems, but they are often ineffective in high-dimensional domains unless provided with either a great deal of problem-specific domain information or a carefully crafted representation of the state and dynamics of the world. We introduce new approaches inspired by these two techniques, which we broadly call abstraction from demonstration. Our first algorithm, state abstraction from demonstration (AfD), uses a small set of human demonstrations of the task the agent must learn to determine a state-space abstraction. Our second algorithm, abstraction and decomposition from demonstration (ADA), is additionally able to determine a task decomposition from the demonstrations. These abstractions allow RL to scale up to higher-complexity domains, and offer much better performance than LfD with orders of magnitude fewer demonstrations. Using a set of videogame-like domains, we demonstrate that abstraction from demonstration can achieve up to exponential speed-ups with table-based representations, and polynomial speed-ups compared with function-approximation-based RL algorithms such as fitted Q-learning and LSPI.
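
The abstract describes a two-stage idea: use demonstrations to find a small subset of state features sufficient to reproduce the demonstrator's actions, then run ordinary RL over that reduced state space. Below is a minimal Python sketch of that general idea, under stated assumptions; it is not the paper's implementation, and the function names, the greedy feature-selection criterion, and the `env` interface (`reset()`, `step(a)`, `actions`) are all illustrative choices of ours.

```python
# Sketch only: demonstrations -> feature subset -> tabular Q-learning on the
# abstracted state. The selection criterion (majority-action agreement per
# abstract state) is a hypothetical stand-in for the paper's method.
import random
from collections import defaultdict

def select_features(demos, n_features, keep):
    """Greedy forward selection of `keep` feature indices from
    `demos`, a list of (state_tuple, action) pairs."""
    def purity(feature_set):
        groups = defaultdict(list)
        for state, action in demos:
            groups[tuple(state[i] for i in feature_set)].append(action)
        # Fraction of demonstrated actions matching the majority action
        # in their abstract state (higher = abstraction loses less).
        agree = sum(max(acts.count(a) for a in set(acts))
                    for acts in groups.values())
        return agree / len(demos)
    chosen = []
    for _ in range(keep):
        best = max((f for f in range(n_features) if f not in chosen),
                   key=lambda f: purity(chosen + [f]))
        chosen.append(best)
    return chosen

def q_learning(env, features, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Standard tabular Q-learning, except states are projected onto the
    selected feature indices before indexing the Q-table. Assumes `env`
    exposes reset(), step(a) -> (state, reward, done), and `actions`."""
    Q = defaultdict(float)
    abstract = lambda s: tuple(s[i] for i in features)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = (random.choice(env.actions) if random.random() < eps
                 else max(env.actions, key=lambda a: Q[abstract(s), a]))
            s2, r, done = env.step(a)
            target = r + (0.0 if done else
                          gamma * max(Q[abstract(s2), b] for b in env.actions))
            Q[abstract(s), a] += alpha * (target - Q[abstract(s), a])
            s = s2
    return Q
```

The point of the projection in `q_learning` is the source of the claimed speed-ups: the Q-table is indexed by only the selected features, so its size, and hence the exploration burden, is exponential in `keep` rather than in the full dimensionality of the state.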

Keywords: Reinforcement learning, Learning from demonstration, Dimensionality reduction, Function approximation

Article history: Received 21 April 2013, Revised 9 July 2014, Accepted 12 July 2014, Available online 18 July 2014.

DOI: https://doi.org/10.1016/j.artint.2014.07.003