The factored policy-gradient planner
Authors:
Abstract
We present an any-time concurrent probabilistic temporal planner (CPTP) that handles continuous and discrete uncertainties as well as metric functions. Rather than relying on dynamic programming, our approach builds on methods from stochastic local policy search: we optimise a parameterised policy by gradient ascent. The flexibility of this policy-gradient approach, combined with its low memory use, its use of function approximation, and the factorisation of the policy, allows us to tackle complex domains. This factored policy-gradient (FPG) planner can optimise the expected number of steps to the goal, the probability of reaching the goal, or a combination of both. We compare the FPG planner to other planners on CPTP domains, and on the simpler but better studied non-concurrent, non-temporal probabilistic planning (PP) domains. We also present FPG-ipc, the PP version of the planner, which was successful in the probabilistic track of the fifth International Planning Competition.
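To make the abstract's core idea concrete, the following is a minimal sketch, not the authors' implementation, of a factored policy trained by REINFORCE-style gradient ascent on the probability of success: each concurrent decision factor (e.g. one eligible task) has its own linear softmax policy over a shared feature vector, and the joint policy is their product. The toy environment, feature map, and all names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_FACTORS = 2    # concurrent decision factors (e.g. one per eligible task)
N_ACTIONS = 2    # choices per factor (e.g. start task / do nothing)
N_FEATURES = 3   # size of the observation feature vector
ALPHA = 0.1      # gradient-ascent step size
HORIZON = 20     # episode cut-off

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def features(state):
    # Toy feature map: bias plus two state indicators (an assumption).
    return np.array([1.0, float(state == 0), float(state == 1)])

def step(state, actions):
    # Toy stochastic environment: progress requires both factors to act.
    if all(a == 1 for a in actions) and rng.random() < 0.8:
        state += 1
    return state, state >= 2   # goal reached at state 2

def run_episode(theta):
    # Sample one episode; accumulate per-factor score functions and the return.
    state = 0
    grads = np.zeros_like(theta)
    for _ in range(HORIZON):
        phi = features(state)
        actions = []
        for f in range(N_FACTORS):
            probs = softmax(theta[f] @ phi)       # factored policy: one softmax per factor
            a = rng.choice(N_ACTIONS, p=probs)
            onehot = np.eye(N_ACTIONS)[a]
            grads[f] += np.outer(onehot - probs, phi)  # d log pi / d theta for softmax
            actions.append(a)
        state, done = step(state, actions)
        if done:
            return grads, 1.0   # success: return 1
    return grads, 0.0           # failure: return 0

theta = np.zeros((N_FACTORS, N_ACTIONS, N_FEATURES))
for it in range(500):
    grads, ret = run_episode(theta)
    theta += ALPHA * ret * grads   # gradient ascent on P(success)

The factorisation is the key memory trick suggested by the abstract: the joint action space grows exponentially in the number of concurrent tasks, but each factor's parameters grow only linearly, and each factor updates from its own score function weighted by the shared episode return.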
Keywords: Concurrent probabilistic temporal planning, Reinforcement learning, Policy-gradient, AI planning
Review history: Received 8 October 2007; Revised 31 October 2008; Accepted 9 November 2008; Available online 27 November 2008.
DOI: https://doi.org/10.1016/j.artint.2008.11.008