Learning by Failing to Explain: Using Partial Explanations to Learn in Incomplete or Intractable Domains
Author: Robert J. Hall
Abstract
Explanation-based learning depends on having an explanation on which to base generalization. Thus, a system with an incomplete or intractable domain theory cannot use this method to learn from every precedent. However, in such cases the system need not resort to purely empirical generalization methods, because it may already know almost everything required to explain the precedent. Learning by failing to explain is a method that uses current knowledge to prune the well-understood portions of complex precedents (and rules) so that what remains may be conjectured as a new rule. This paper describes precedent analysis, partial explanation of a precedent (or rule) to isolate the new technique(s) it embodies, and rule reanalysis, which involves analyzing old rules in terms of new rules to obtain a more general set. The algorithms PA, PA-RR, and PA-RR-GW implement these ideas in the domains of digital circuit design and simplified gear design.
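The pruning idea in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm (PA and its variants operate on graph-grammar representations of designs); it only shows the core intuition under a toy representation in which a precedent is a set of components and each known rule names the components it explains. Everything here, including the function name `conjecture_new_rule` and the example data, is hypothetical:

```python
def conjecture_new_rule(precedent, domain_theory):
    """Prune the well-understood portions of a precedent using known
    rules; the unexplained residue becomes a candidate new rule."""
    residue = set(precedent)
    for rule_name, covered in domain_theory.items():
        if covered <= residue:   # this rule fully explains part of the precedent
            residue -= covered   # prune the explained portion
    return residue               # conjectured body of a new rule

# Hypothetical circuit-design example: two known rules explain most of
# the precedent, leaving one unexplained component as the conjecture.
theory = {
    "inverter": {"nand_tied_inputs"},
    "and_gate": {"nand", "inverter_stage"},
}
precedent = {"nand_tied_inputs", "nand", "inverter_stage", "novel_latch"}
print(conjecture_new_rule(precedent, theory))  # {'novel_latch'}
```

The point of the sketch is only that learning proceeds from a *partial* explanation: nothing purely empirical is needed, because the domain theory already accounts for everything except the residue.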
Keywords: Learning by failing to explain, explanation-based learning, graph grammar, design
Paper URL: https://doi.org/10.1023/A:1022685515549