Monte-Carlo tree search for Bayesian reinforcement learning
Authors: Ngo Anh Vien, Wolfgang Ertel, Viet-Hung Dang, TaeChoong Chung
Abstract
Bayesian model-based reinforcement learning can be formulated as a partially observable Markov decision process (POMDP) to provide a principled framework for optimally balancing exploitation and exploration. A POMDP solver can then be used to solve the problem. If the prior distribution over the environment’s dynamics is a product of Dirichlet distributions, the POMDP’s optimal value function can be represented using a set of multivariate polynomials. Unfortunately, the size of the polynomials grows exponentially with the problem horizon. In this paper, we examine the use of an online Monte-Carlo tree search (MCTS) algorithm for large POMDPs to solve the Bayesian reinforcement learning problem online. We show that such an algorithm successfully searches for a near-optimal policy. In addition, we examine the use of a parameter-tying method to keep the model search space small, and propose the use of a nested mixture of tied models to increase the robustness of the method when our prior information does not allow us to specify the structure of tied models exactly. Experiments show that the proposed methods substantially improve the scalability of current Bayesian reinforcement learning methods.
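To make the abstract's setting concrete, the following is a minimal Python sketch of Bayesian model-based reinforcement learning with a product-of-Dirichlets belief over the transition dynamics and Monte-Carlo planning on models sampled from that belief. It is an illustration only, not the algorithm of the paper: the class and function names (`DirichletBelief`, `mcts_q_estimate`), the toy problem size, and the simplified planner (root-sampling of the dynamics with a flat UCB1 choice at the root and uniform random rollouts, instead of a full search tree) are assumptions made for the example.

```python
import numpy as np

class DirichletBelief:
    """Belief over unknown transition dynamics T(s'|s,a): one Dirichlet per (s,a) pair."""
    def __init__(self, n_states, n_actions, prior=1.0):
        self.counts = np.full((n_states, n_actions, n_states), prior)

    def sample_model(self, rng):
        """Draw one complete transition model from the product-of-Dirichlets posterior."""
        model = np.empty_like(self.counts)
        for s in range(self.counts.shape[0]):
            for a in range(self.counts.shape[1]):
                model[s, a] = rng.dirichlet(self.counts[s, a])
        return model

    def update(self, s, a, s_next):
        """Conjugate Bayesian update after observing one real transition."""
        self.counts[s, a, s_next] += 1.0


def mcts_q_estimate(belief, reward, s0, n_sims=500, depth=15, gamma=0.95, c=1.0, seed=0):
    """Estimate Q(s0, .) with UCB1 at the root and random rollouts in sampled models."""
    rng = np.random.default_rng(seed)
    n_actions = belief.counts.shape[1]
    visits = np.zeros(n_actions)
    returns = np.zeros(n_actions)
    for _ in range(n_sims):
        model = belief.sample_model(rng)              # root-sample the dynamics
        untried = np.flatnonzero(visits == 0)
        if untried.size > 0:                          # try each root action once first
            a0 = int(untried[0])
        else:                                         # then follow UCB1
            ucb = returns / visits + c * np.sqrt(np.log(visits.sum()) / visits)
            a0 = int(np.argmax(ucb))
        s, a, ret, disc = s0, a0, 0.0, 1.0
        for _ in range(depth):                        # rollout in the sampled model
            ret += disc * reward[s, a]
            s = rng.choice(len(model[s, a]), p=model[s, a])
            a = int(rng.integers(n_actions))          # uniform random rollout policy
            disc *= gamma
        visits[a0] += 1
        returns[a0] += ret
    return returns / np.maximum(visits, 1)


# Hypothetical toy problem (assumed sizes): 3 states, 2 actions, random rewards.
reward = np.random.default_rng(1).uniform(size=(3, 2))
belief = DirichletBelief(n_states=3, n_actions=2)
print(mcts_q_estimate(belief, reward, s0=0))
```

In this sketch the Dirichlet counts play the role of the POMDP belief state: each observed transition updates the posterior, and planning is done by repeatedly sampling concrete MDPs from it, which is what lets the simulation-based search balance exploitation and exploration without enumerating the exponentially growing polynomial value function mentioned above.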
Keywords: Bayesian reinforcement learning, Model-based reinforcement learning, Monte-Carlo tree search, POMDP
Paper URL: https://doi.org/10.1007/s10489-012-0416-2