Approximating the Semantics of Logic Programs by Recurrent Neural Networks

Authors: Steffen Hölldobler, Yvonne Kalinke, Hans-Peter Störr

Abstract

In [1] we showed how to construct a 3-layered recurrent neural network that computes the fixed point of the meaning function T_P of a given propositional logic program P, which corresponds to computing the semantics of P. In this article we consider the first-order case. We define a notion of approximation for interpretations and prove that there exists a 3-layered feedforward neural network that approximates the computation of T_P arbitrarily well for a given first-order acyclic logic program P with an injective level mapping. By extending the feedforward network with recurrent connections, we obtain a recurrent neural network whose iteration approximates the fixed point of T_P. The result is proven by exploiting the fact that, for acyclic logic programs, T_P is a contraction mapping on the complete metric space formed by the interpretations of the program. By mapping this space into the metric space ℝ with the Euclidean distance, a real-valued function f_P can be defined that corresponds to T_P and is both continuous and a contraction. Consequently, it can be approximated by an appropriately chosen class of feedforward neural networks.
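To make the abstract's central object concrete, here is a minimal Python sketch of the propositional immediate consequence operator T_P and its fixed-point iteration (the computation the recurrent network emulates), together with one plausible level-mapping embedding of interpretations into ℝ. The names t_p, iterate_tp, and embed, the program encoding, and the base-4 choice are illustrative assumptions, not the paper's construction.

```python
def t_p(program, interpretation):
    """One application of T_P: an atom is true in the result iff some
    clause head <- body has every body atom true in `interpretation`."""
    return frozenset(head for head, body in program if body <= interpretation)

def iterate_tp(program, interpretation=frozenset()):
    """Iterate T_P from the empty interpretation until a fixed point is
    reached, mirroring the recurrent network's repeated forward passes."""
    while True:
        nxt = t_p(program, interpretation)
        if nxt == interpretation:
            return interpretation
        interpretation = nxt

def embed(interpretation, level, base=4):
    """Hedged sketch of mapping an interpretation into ℝ: given an
    injective level mapping `level`, send I to the sum of
    base**(-level(A)) over A in I (a base > 2 keeps this injective)."""
    return sum(base ** -level(atom) for atom in interpretation)

# Example program: p., q <- p., r <- q.
program = [("p", frozenset()),
           ("q", frozenset({"p"})),
           ("r", frozenset({"q"}))]
model = iterate_tp(program)
print(sorted(model))  # ['p', 'q', 'r']
print(embed(model, level={"p": 1, "q": 2, "r": 3}[0:] if False else
            (lambda a: {"p": 1, "q": 2, "r": 3}[a])))
```

For acyclic programs this iteration converges because T_P is a contraction on the space of interpretations; under an embedding like embed, the induced real-valued f_P is a contraction on ℝ, which is what licenses approximating it with feedforward networks.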

Keywords: recurrent neural networks, logic programs, model generation, approximations


DOI: https://doi.org/10.1023/A:1008376514077