Defaults and relevance in model-based reasoning
Authors:
Abstract
Reasoning with model-based representations is an intuitive paradigm that has been shown to be theoretically sound and to offer computational advantages over reasoning with formula-based representations of knowledge. This paper studies these representations and further substantiates the claim regarding their advantages. In particular, model-based representations are shown to efficiently support reasoning in the presence of varying context information, to efficiently handle fragments of Reiter's default logic, and to provide a useful way to integrate learning with reasoning. Furthermore, these results are closely related to the notion of relevance. The use of relevance information is best exemplified by the filtering process in the algorithm developed for reasoning within context. The relation of defaults to relevance is viewed through the notion of context, where the agent must find plausible context information by using default rules; this view yields efficient algorithms for default reasoning. Finally, it is argued that these results support an incremental view of reasoning in a natural way, and the notion of relevance to the environment, captured by the Learning to Reason framework, is discussed.
Keywords: Knowledge representation, Common-sense reasoning, Learning to reason, Reasoning with models, Context, Default reasoning
Article history: Available online 19 May 1998.
DOI: https://doi.org/10.1016/S0004-3702(97)00044-1