Generative adversarial networks based on Wasserstein distance for knowledge graph embeddings
Abstract
Knowledge graph embedding aims to project entities and relations into low-dimensional, continuous semantic feature spaces, and has attracted increasing attention in recent years. Most existing models construct negative samples by uniform random corruption, so the resulting corrupted triplets are often trivial for training the embedding model. Inspired by generative adversarial networks (GANs), a generator can be employed to sample more plausible negative triplets, which in turn pushes the discriminator to further improve its embedding performance. However, traditional GANs suffer from an inherent vanishing-gradient problem on discrete data. In this paper, we propose a GAN-based knowledge graph representation learning model that replaces the traditional divergence with the Wasserstein distance to address this issue. Moreover, additional weak supervision information is incorporated to refine the embedding model, since this textual information contains detailed semantic descriptions and offers abundant semantic relevance. In the experiments, we evaluate our method on link prediction and triplet classification. The results indicate that the Wasserstein distance solves the vanishing-gradient problem on discrete data and accelerates convergence, and that the additional weak supervision information significantly improves the performance of the model.
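The abstract's core idea can be illustrated with a minimal sketch (not the authors' implementation; all function names here are hypothetical) of WGAN-style objectives: the critic's loss is the difference of mean scores between generated negative triplets and true triplets, which stays differentiable where a saturated JS-divergence loss would not, and the generator is rewarded for negatives the critic scores highly.

```python
def critic_loss(pos_scores, neg_scores):
    # Wasserstein critic objective: maximize E[f(true)] - E[f(fake)].
    # We return the negated difference so it can be minimized directly.
    return sum(neg_scores) / len(neg_scores) - sum(pos_scores) / len(pos_scores)

def generator_loss(neg_scores):
    # The generator tries to produce negative triplets that the critic
    # scores as highly as true ones, i.e. maximize E[f(fake)].
    return -sum(neg_scores) / len(neg_scores)

# Toy scores, e.g. f(h, r, t) = -||h + r - t|| under a TransE-style model.
pos = [1.0, 1.0]   # true triplets score high under the critic
neg = [0.0, 0.0]   # trivial negatives score low
print(critic_loss(pos, neg))   # -1.0: critic already separates the two sets
print(generator_loss(neg))     # -0.0: generator gains by raising neg scores
```

In the full model a Lipschitz constraint on the critic (e.g. weight clipping or a gradient penalty) is also required for the score difference to approximate the Wasserstein distance.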
Keywords: Knowledge graph embedding, Generative adversarial networks, Wasserstein distance, Weak supervision information
Article history: Received 14 May 2019, Revised 25 October 2019, Accepted 29 October 2019, Available online 31 October 2019, Version of Record 7 February 2020.
DOI: https://doi.org/10.1016/j.knosys.2019.105165