Abstraction and approximate decision-theoretic planning
Authors:
Abstract
Markov decision processes (MDPs) have recently been proposed as useful conceptual models for understanding decision-theoretic planning. However, the utility of the associated computational methods remains open to question: most algorithms for computing optimal policies require explicit enumeration of the state space of the planning problem. We propose an abstraction technique for MDPs that allows approximately optimal solutions to be computed quickly. Abstractions are generated automatically, using an intensional representation of the planning problem (probabilistic STRIPS rules) to determine the most relevant problem features and optimally solving a reduced problem based on these relevant features. The key features of our method are: abstractions can be generated quickly; the abstract solution can be applied directly to the original problem; and the loss of optimality can be bounded. We also describe methods by which the abstract solution can be viewed as a set of default reactions that can be improved incrementally, and used as a heuristic for search-based planning or other MDP methods. Finally, we discuss certain difficulties that point toward other forms of aggregation for MDPs.
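The following is a minimal, hypothetical sketch of the core idea described in the abstract: project a factored state onto a set of relevant variables, solve the resulting small abstract MDP exactly, and execute the abstract policy in the original problem by projection. The variable names, transition model, and discount factor are all illustrative assumptions; in particular, the paper's actual contribution, selecting the relevant features automatically from probabilistic STRIPS rules and bounding the resulting loss of optimality, is not reproduced here.

```python
# Hypothetical illustration only: variable names, transition probabilities,
# and the discount factor are assumptions, not taken from the paper.

GAMMA = 0.9            # discount factor (assumed)
RELEVANT = ("goal",)   # the paper derives this set automatically from
                       # probabilistic STRIPS rules; here it is simply given

def project(state, relevant=RELEVANT):
    """Project a concrete state (dict: variable -> bool) onto the relevant
    variables, yielding an abstract state."""
    return tuple(state[v] for v in relevant)

# A tiny abstract MDP over the projected space.
# T[s][a] is a list of (probability, next_state) pairs; R[s] is the reward.
STATES = [(False,), (True,)]
ACTIONS = ("work", "wait")
T = {
    (False,): {"work": [(0.8, (True,)), (0.2, (False,))],
               "wait": [(1.0, (False,))]},
    (True,):  {"work": [(1.0, (True,))],
               "wait": [(1.0, (True,))]},
}
R = {(False,): 0.0, (True,): 1.0}

def solve_abstract_mdp(eps=1e-6):
    """Standard value iteration on the (small) abstract MDP; the reduced
    state space is what makes an explicit solution feasible."""
    V = {s: 0.0 for s in STATES}
    while True:
        Q = {s: {a: R[s] + GAMMA * sum(p * V[s2] for p, s2 in T[s][a])
                 for a in ACTIONS} for s in STATES}
        V_new = {s: max(Q[s].values()) for s in STATES}
        if max(abs(V_new[s] - V[s]) for s in STATES) < eps:
            return {s: max(Q[s], key=Q[s].get) for s in STATES}
        V = V_new

policy = solve_abstract_mdp()
# The abstract policy applies directly to concrete states: ignored variables
# (here "weather") simply do not affect the chosen action.
concrete_state = {"goal": False, "weather": True}
print(policy[project(concrete_state)])   # -> "work"
```

As the abstract notes, the loss of optimality of such a policy can be bounded; intuitively, the fewer reward-relevant distinctions the projection discards, the tighter the bound.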
Keywords: Planning, Decision theory, Abstraction, Approximation, Search, Heuristics, Execution, Markov decision processes
Article history: Available online 19 May 1998.
DOI: https://doi.org/10.1016/S0004-3702(96)00023-9