Inclusion of domain-knowledge into GNNs using mode-directed inverse entailment

Authors: Tirtharaj Dash, Ashwin Srinivasan, A. Baskar

Abstract

We present a general technique for constructing Graph Neural Networks (GNNs) capable of using multi-relational domain knowledge. The technique is based on mode-directed inverse entailment (MDIE) developed in Inductive Logic Programming (ILP). Given a data instance e and background knowledge B, MDIE identifies a most-specific logical formula \(\bot _B(e)\) that contains all the relational information in B that is related to e. We represent \(\bot _B(e)\) by a “bottom-graph” that can be converted into a form suitable for GNN implementations. This transformation allows a principled way of incorporating generic background knowledge into GNNs: we use the term ‘BotGNN’ for this form of graph neural network. For several GNN variants, using real-world datasets with substantial background knowledge, we show that BotGNNs perform significantly better than both GNNs without background knowledge and a recently proposed simplified technique for including domain knowledge into GNNs. We also provide experimental evidence comparing BotGNNs favourably to multi-layer perceptrons that use features representing a “propositionalised” form of the background knowledge, and to a standard ILP method based on the use of most-specific clauses. Taken together, these results point to BotGNNs as capable of combining the computational efficacy of GNNs with the representational versatility of ILP.
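The abstract describes representing the most-specific clause \(\bot _B(e)\) as a “bottom-graph” usable by a GNN. As a rough illustration of the idea, the sketch below builds a bipartite graph whose vertices are the clause's body literals and their argument terms, with edges labelled by argument position. The data shapes and names here (`bottom_clause_to_graph`, the `(predicate, args)` tuple encoding) are assumptions for illustration, not the paper's actual BotGNN construction.

```python
# Illustrative sketch only: a simplified bipartite "clause graph" built from
# the body literals of a bottom clause. Literal-nodes connect to term-nodes,
# with edges labelled by argument position. This is NOT the paper's exact
# bottom-graph encoding; names and data shapes are hypothetical.

def bottom_clause_to_graph(body_literals):
    """body_literals: list of (predicate, args) tuples,
    e.g. ('bond', ('a1', 'a2'))."""
    lit_nodes = []            # one node per body literal
    term_nodes = {}           # one node per distinct term
    edges = []                # (literal_node, term_node, arg_position)
    for i, (pred, args) in enumerate(body_literals):
        lit_id = f"lit{i}:{pred}"
        lit_nodes.append(lit_id)
        for pos, term in enumerate(args):
            term_nodes.setdefault(term, f"term:{term}")
            edges.append((lit_id, term_nodes[term], pos))
    return lit_nodes, sorted(term_nodes.values()), edges

# A toy molecular example: two atoms of molecule m1 joined by a bond.
lits = [("atom", ("m1", "a1")), ("atom", ("m1", "a2")), ("bond", ("a1", "a2"))]
lit_nodes, term_nodes, edges = bottom_clause_to_graph(lits)
```

A graph like this can then be handed to a standard GNN library after assigning feature vectors to the literal and term vertices.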

Keywords: Neuro-symbolic learning, Inductive logic programming, Mode-directed inverse entailment, Graph neural networks, Background knowledge


Paper URL: https://doi.org/10.1007/s10994-021-06090-8