Model selection in reinforcement learning
Authors: Amir-massoud Farahmand, Csaba Szepesvári
Abstract
We consider the problem of model selection in the batch (offline, non-interactive) reinforcement learning setting, where the goal is to find an action-value function with the smallest Bellman error among a countable set of candidate functions. We propose a complexity regularization-based model selection algorithm, BErMin, and prove that it enjoys an oracle-like property: the estimator's error differs from that of an oracle, who selects the candidate with the minimum Bellman error, by only a constant factor and a small remainder term that vanishes at a parametric rate as the number of samples increases. As an application, we consider the problem in which the true action-value function belongs to an unknown member of a nested sequence of function spaces. We show that under some additional technical conditions BErMin leads to a procedure whose rate of convergence, up to a constant factor, matches that of an oracle who knows which of the nested function spaces the true action-value function belongs to, i.e., the procedure achieves adaptivity.
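To illustrate the kind of guarantee described above, the following is a schematic sketch of a complexity-regularization selection rule and the resulting oracle-type bound; the symbols \(\widehat{\mathrm{BE}}_n\), \(\mathrm{pen}_n\), and \(C\) are illustrative placeholders and not the paper's exact definitions or constants. Given candidates \(Q_1, Q_2, \dots\) with empirical Bellman-error estimates \(\widehat{\mathrm{BE}}_n(Q_k)\) computed from \(n\) samples and complexity penalties \(\mathrm{pen}_n(k)\), a procedure of this type selects
\[
\hat{k} \in \operatorname*{arg\,min}_{k} \Big\{ \widehat{\mathrm{BE}}_n(Q_k) + \mathrm{pen}_n(k) \Big\},
\]
and the oracle-like property then takes the schematic form
\[
\mathrm{BE}(Q_{\hat{k}}) \;\le\; C \, \inf_{k} \Big\{ \mathrm{BE}(Q_k) + \mathrm{pen}_n(k) \Big\} \;+\; O\!\left(\tfrac{1}{n}\right)
\]
with high probability, where the remainder term vanishes at a parametric rate as \(n\) grows.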
Keywords: Reinforcement learning, Model selection, Complexity regularization, Adaptivity, Offline learning, Off-policy learning, Finite-sample bounds
Paper link: https://doi.org/10.1007/s10994-011-5254-7