Bayesian optimistic Kullback–Leibler exploration
Authors: Kanghoon Lee, Geon-Hyeong Kim, Pedro Ortega, Daniel D. Lee, Kee-Eung Kim
Abstract
We consider a Bayesian approach to model-based reinforcement learning, in which the agent uses a distribution over environment models to find the action that optimally trades off exploration and exploitation. Unfortunately, computing the Bayes-optimal solution is intractable except in restricted cases. In this paper, we present BOKLE (Bayesian Optimistic Kullback–Leibler Exploration), a simple algorithm that uses the Kullback–Leibler divergence to constrain the set of plausible models guiding exploration. We provide a formal analysis showing that the algorithm is near Bayes-optimal with high probability. We also show an asymptotic relation between the solution pursued by BOKLE and the well-known Bayesian Exploration Bonus (BEB) algorithm. Finally, we present experimental results that clearly demonstrate the exploration efficiency of the algorithm.
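The abstract alludes to a core computational step: picking an optimistic transition model from a KL-constrained neighborhood of the posterior. The sketch below is a minimal illustration of that idea, not the authors' algorithm. It assumes a Dirichlet posterior over next-state transitions for a single state-action pair; the function name `optimistic_transition`, the radius `eps`, and the toy numbers are all hypothetical, and the inner maximization is solved with a general-purpose optimizer for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def optimistic_transition(p_hat, v, eps):
    """Illustrative KL-constrained optimism for one state-action pair:
    maximize p @ v over distributions p with KL(p_hat || p) <= eps.

    p_hat : posterior-mean transition probabilities (e.g. from a Dirichlet)
    v     : estimated values of next states
    eps   : radius of the KL neighborhood (hypothetical choice here)
    """
    n = len(p_hat)

    def neg_value(p):
        # Optimism: maximize expected next-state value.
        return -p @ v

    def kl_slack(p):
        # Constraint KL(p_hat || p) <= eps, written as eps - KL >= 0.
        mask = p_hat > 0
        kl = np.sum(p_hat[mask] * np.log(p_hat[mask] / np.maximum(p[mask], 1e-12)))
        return eps - kl

    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
            {"type": "ineq", "fun": kl_slack}]
    bounds = [(1e-12, 1.0)] * n
    # p_hat itself is feasible (KL = 0), so it is a valid starting point.
    res = minimize(neg_value, p_hat.copy(), bounds=bounds,
                   constraints=cons, method="SLSQP")
    return res.x

# Toy usage: posterior mean from hypothetical Dirichlet counts.
counts = np.array([3.0, 1.0, 1.0])   # observed transitions to 3 next states
p_hat = counts / counts.sum()
v = np.array([0.0, 1.0, 5.0])        # next-state value estimates
p_opt = optimistic_transition(p_hat, v, eps=0.1)
```

The optimistic model `p_opt` shifts probability mass toward high-value next states while staying within the KL ball around the posterior mean; in an analysis like the paper's, the radius would shrink as the posterior counts grow, which is what drives exploration toward less-visited state-action pairs.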
Keywords: Model-based Bayesian reinforcement learning, Bayes-adaptive Markov decision process, PAC-BAMDP
Paper URL: https://doi.org/10.1007/s10994-018-5767-4