Adaptive cross-contextual word embedding for word polysemy with unsupervised topic modeling
Authors:
Abstract
Because of its efficiency, word embedding has been widely used in many natural language processing and text modeling tasks. It aims to represent each word by a vector such that the geometry of these vectors captures the semantic correlations between words. An ambiguous word can have diverse meanings in different contexts, a property known as polysemy. The bulk of existing studies generate only a single embedding for each word, whereas a few studies learn a small number of embeddings to represent the different meanings of each word. However, it is hard to determine the exact number of senses for each word, as meanings depend on contexts. To address this problem, this paper proposes a novel adaptive cross-contextual word embedding (ACWE) method for capturing word polysemy in different contexts based on topic modeling, in which word polysemy is defined over a latent interpretable semantic space. The proposed ACWE consists of two main parts. In the first, an unsupervised cross-contextual probabilistic word embedding model is designed to obtain global word embeddings, with each word represented by an embedding in a unified latent semantic space. Based on the global word embeddings, an adaptive cross-contextual word embedding process is then devised in the second part to learn local embeddings for each polysemous word in different contexts. In effect, a word embedding is adaptively adjusted and updated with respect to different contexts, yielding word embeddings tailored to the corresponding contexts. The proposed ACWE is validated on two datasets collected from Wikipedia and IMDb on tasks including word similarity, polysemy induction, semantic interpretability, and text classification. Experimental results indicate that ACWE not only outperforms established word embedding methods that consider word polysemy on six popular benchmark datasets, but also yields competitive performance compared with state-of-the-art deep learning-based approaches that do not consider polysemy. Moreover, ACWE significantly improves text classification performance in both precision and F1, and visualizations of word semantics demonstrate the feasibility and advantage of the proposed model on polysemy.
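To make the two-stage idea concrete, the following is a minimal, hedged sketch, not the authors' ACWE implementation: it uses plain LDA from scikit-learn to build topic-space "global" word embeddings and then reweights them by a context's topic distribution to obtain context-tailored "local" embeddings. The helper names `global_embedding` and `contextual_embedding`, the toy corpus, and the multiplicative reweighting rule are all assumptions for illustration; the paper's actual probabilistic model and adaptive update rules differ in detail.

```python
# Illustrative sketch only: standard LDA stands in for the paper's
# cross-contextual probabilistic embedding model.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Tiny toy corpus with two senses of "bank" (finance vs. river).
corpus = [
    "the bank approved the loan and set the interest rate",
    "deposits at the bank earn interest every month",
    "we walked along the river bank and watched the water",
    "the river bank was muddy after heavy rain",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)
vocab = vectorizer.get_feature_names_out()
word_index = {w: i for i, w in enumerate(vocab)}

# Two latent topics play the role of the interpretable semantic space.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

def global_embedding(word):
    """Global embedding: approximate p(topic | word) by normalizing the
    topic-word pseudo-counts for this word across topics."""
    col = lda.components_[:, word_index[word]]
    return col / col.sum()

def contextual_embedding(word, context):
    """Local embedding: reweight the global topic vector by the context's
    inferred topic distribution, then renormalize."""
    doc_topics = lda.transform(vectorizer.transform([context]))[0]
    local = global_embedding(word) * doc_topics
    return local / local.sum()

print(global_embedding("bank"))
# Each context should shift the mass of "bank" toward a different topic:
print(contextual_embedding("bank", "loan interest rate"))
print(contextual_embedding("bank", "river water muddy rain"))
```

The design point the sketch captures is that the number of senses is never fixed in advance: every context induces its own reweighting of a single global vector, so a word receives as many local embeddings as it has distinct contexts.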
Keywords: Word polysemy, Representation learning, Adaptive word embeddings, Tailored word embedding, Topic modeling, Semantic learning
Article history: Received 14 March 2020, Revised 31 January 2021, Accepted 2 February 2021, Available online 19 February 2021, Version of Record 25 February 2021.
DOI: https://doi.org/10.1016/j.knosys.2021.106827