Most frugal explanations in Bayesian networks

Authors:

Abstract

Inferring the most probable explanation to a set of variables, given a partial observation of the remaining variables, is one of the canonical computational problems in Bayesian networks, with widespread applications in AI and beyond. This problem, known as MAP, is computationally intractable (NP-hard) and remains so even when only an approximate solution is sought. We propose a heuristic formulation of the MAP problem, denoted as Inference to the Most Frugal Explanation (MFE), based on the observation that many intermediate variables (that are neither observed nor to be explained) are irrelevant with respect to the outcome of the explanatory process. An explanation based on few samples (often even a singleton sample) from these irrelevant variables is typically almost as good as an explanation based on (the computationally costly) marginalization over these variables. We show that while MFE is computationally intractable in general (as is MAP), it can be tractably approximated under plausible situational constraints, and its inferences are fairly robust with respect to which intermediate variables are considered to be relevant.
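To make the idea concrete, below is a minimal, self-contained sketch of the "frugal" strategy the abstract describes: instead of marginalizing over every intermediate variable, the irrelevant intermediates are sampled a few times and the modal best explanation across those samples is returned. The toy network, variable names (H, R, S, E), and all probability tables are illustrative assumptions for this sketch, not taken from the paper.

```python
import random
from collections import Counter

# Toy Bayesian network (all CPTs are illustrative, not from the paper):
#   H -> R -> E,  S -> E
# H: hypothesis variable (the explanation we seek)
# R: "relevant" intermediate   (marginalized over)
# S: "irrelevant" intermediate (sampled instead of marginalized)
# E: observed evidence variable

P_H = {0: 0.6, 1: 0.4}
P_R_given_H = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}   # P(R=r | H=h)
P_S = {0: 0.5, 1: 0.5}
P_E_given_RS = {  # P(E=1 | R=r, S=s)
    (0, 0): 0.1, (0, 1): 0.4,
    (1, 0): 0.5, (1, 1): 0.9,
}

def score(h, s, e):
    """Unnormalized P(H=h, S=s, E=e), marginalizing over the relevant intermediate R."""
    total = 0.0
    for r in (0, 1):
        p_e = P_E_given_RS[(r, s)] if e == 1 else 1.0 - P_E_given_RS[(r, s)]
        total += P_H[h] * P_R_given_H[h][r] * P_S[s] * p_e
    return total

def mfe(evidence_e, num_samples=7, rng=random):
    """Frugal explanation sketch: sample the irrelevant intermediate S a few
    times, take the best hypothesis for each sample, and return the modal one."""
    votes = Counter()
    for _ in range(num_samples):
        s = 0 if rng.random() < P_S[0] else 1          # sample S from its prior
        best_h = max((0, 1), key=lambda h: score(h, s, evidence_e))
        votes[best_h] += 1
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    random.seed(0)
    print("Frugal explanation for E=1:", mfe(evidence_e=1))
```

With few samples (even a single one), the modal hypothesis typically matches the one obtained by exact marginalization over S, which is the intuition behind MFE's cost savings.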

Keywords: Bayesian abduction, Parameterized complexity, Approximation, Heuristics, Computational complexity

Article history: Received 1 October 2013, Revised 13 October 2014, Accepted 16 October 2014, Available online 22 October 2014.

Paper URL: https://doi.org/10.1016/j.artint.2014.10.001