A new hybrid semi-supervised algorithm for text classification with class-based semantics
Abstract
Vector Space Models (VSM) are commonly used in language processing to represent certain aspects of natural language semantics. The semantics of VSMs come from the distributional hypothesis, which states that words occurring in similar contexts usually have similar meanings. In our previous work, we proposed novel semantic smoothing kernels based on class-specific transformations. These kernels use class-term matrices, which can be considered a new type of VSM. By using the class as the context, these methods can extract class-specific semantics by making use of word distributions both in documents and in different classes. In this study, we adapt two of these semantic classification approaches to build a novel and high-performance semi-supervised text classification algorithm. These approaches include a Helmholtz principle based calculation of term meanings in the context of classes for the initial classification, and a supervised term weighting based semantic kernel with Support Vector Machines (SVM) for the final classification model. The approach used in the first phase is especially good at learning from very small datasets, while the approach in the second phase is particularly good at eliminating noise in relatively large and noisy training sets when building a classification model. Overall, as a semantic semi-supervised learning algorithm, our approach can effectively utilize an abundant source of unlabeled instances to improve classification accuracy significantly, especially when the number of labeled instances is limited.
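The two-phase idea in the abstract (an initial classifier pseudo-labels unlabeled documents, then a final model is trained on the expanded set) can be illustrated with a minimal self-training sketch. This is not the authors' actual method: the Helmholtz-principle term meanings and the semantic smoothing SVM kernel are replaced here by a simple class-term matrix with cosine scoring, and all function names, the confidence `threshold`, and the toy documents are illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

def class_term_matrix(docs, labels):
    """Aggregate term counts per class: a toy stand-in for the paper's
    class-term matrix (class-as-context VSM)."""
    ctm = defaultdict(Counter)
    for doc, lab in zip(docs, labels):
        ctm[lab].update(doc.split())
    return ctm

def cosine(c1, c2):
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(c1[t] * c2[t] for t in set(c1) & set(c2))
    den = (math.sqrt(sum(v * v for v in c1.values()))
           * math.sqrt(sum(v * v for v in c2.values())))
    return num / den if den else 0.0

def classify(doc, ctm):
    """Assign the class whose aggregated term vector is closest."""
    counts = Counter(doc.split())
    scores = {lab: cosine(counts, terms) for lab, terms in ctm.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

def self_train(labeled, unlabeled, threshold=0.3, rounds=3):
    """Phase 1: pseudo-label unlabeled docs whose best score clears the
    (assumed) confidence threshold.  Phase 2: rebuild the model on the
    expanded training set."""
    docs = [d for d, _ in labeled]
    labels = [l for _, l in labeled]
    pool = list(unlabeled)
    for _ in range(rounds):
        ctm = class_term_matrix(docs, labels)
        remaining = []
        for doc in pool:
            lab, score = classify(doc, ctm)
            if score >= threshold:
                docs.append(doc)
                labels.append(lab)
            else:
                remaining.append(doc)
        pool = remaining
    return class_term_matrix(docs, labels)

labeled = [("ball goal match", "sport"), ("stock market price", "finance")]
unlabeled = ["goal match win", "market price rise"]
model = self_train(labeled, unlabeled)
print(classify("match goal", model)[0])  # the unlabeled docs sharpen the class vectors
```

In the paper itself, the first phase is carried out by the Helmholtz-principle meaning scores rather than cosine similarity, and the final model is an SVM with a supervised term weighting kernel, which is what makes the second phase robust to pseudo-labeling noise.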
Keywords: Semantics, Semi-supervised classification, Text classification, Semantic smoothing kernel, Class-based transformations
Article history: Received 15 November 2015, Revised 13 June 2016, Accepted 14 June 2016, Available online 15 June 2016, Version of Record 12 August 2016.
DOI: https://doi.org/10.1016/j.knosys.2016.06.021