Cognitive structure learning model for hierarchical multi-label text classification
Abstract:
The human mind grows by learning new knowledge, which it eventually organizes into a basic mental pattern called a cognitive structure. Hierarchical multi-label text classification (HMLTC), a fundamental but challenging task in many real-world applications, aims to classify documents with hierarchical labels through a process that resembles cognitive structure learning. From a cognitive view, existing approaches for HMLTC focus mainly on either partial new-knowledge learning or the utilization of the global, cognitive-structure-like label structure. However, a complete cognitive structure learning model is a unity indispensably constructed from both global label structure utilization and partial knowledge learning, which those HMLTC approaches ignore. To address this problem, we imitate the cognitive structure learning process in HMLTC and propose a unified framework called the Hierarchical Cognitive Structure Learning Model (HCSM). HCSM is composed of an Attentional Ordered Recurrent Neural Network (AORNN) submodule and a Hierarchical Bi-Directional Capsule (HBiCaps) submodule. Both submodules comprehensively exploit partial new knowledge and the global hierarchical label structure for the HMLTC task. On the one hand, AORNN extracts a semantic vector, as partial new knowledge, from the original text at word-level and hierarchy-level embedding granularities. On the other hand, AORNN builds a hierarchical text representation corresponding to the global label structure through document-level neuron ordering. HBiCaps employs an iterative procedure that forms a unified label categorization process similar to cognitive structure learning: first, it computes probabilities over local hierarchical relationships to maintain partial knowledge learning; second, it modifies the global hierarchical label structure using the dynamic routing mechanism between capsules.
Moreover, experimental results on four benchmark datasets demonstrate that HCSM outperforms or matches state-of-the-art text classification methods.
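The dynamic routing between capsules mentioned in the abstract follows the general routing-by-agreement scheme for capsule networks. The sketch below is a minimal, generic illustration of that scheme (shapes, iteration count, and function names are assumptions for illustration, not the authors' HBiCaps implementation): lower-level capsules vote for upper-level capsules, coupling coefficients are refined by the agreement between votes and outputs, and the squash non-linearity bounds each output vector's norm.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, axis=-1, eps=1e-8):
    # Squashing non-linearity: preserves direction, maps the norm into [0, 1).
    n2 = (s ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, n_iter=3):
    """Generic routing-by-agreement sketch (assumed shapes).

    u_hat: predictions from lower- to upper-level capsules,
           shape (n_lower, n_upper, dim).
    Returns upper-level capsule outputs, shape (n_upper, dim).
    """
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))             # routing logits
    for _ in range(n_iter):
        c = softmax(b, axis=1)                   # coupling coefficients per lower capsule
        s = (c[..., None] * u_hat).sum(axis=0)   # weighted votes -> (n_upper, dim)
        v = squash(s)                            # bounded upper-level outputs
        b += (u_hat * v[None, :, :]).sum(-1)     # agreement strengthens the routing logits
    return v

# Toy example: 6 lower capsules routing to 3 upper capsules of dimension 4.
rng = np.random.default_rng(0)
v = dynamic_routing(rng.normal(size=(6, 3, 4)))
print(v.shape)  # (3, 4)
```

Because the squash function bounds each output's norm below 1, the norm of an upper-level capsule vector can be read as a label probability, which fits the hierarchical label categorization role HBiCaps plays in HCSM.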
Keywords: Hierarchical multi-label text classification, Cognitive structure, Capsule neural network, Text representation
Article history: Received 22 August 2020, Revised 10 January 2021, Accepted 16 February 2021, Available online 18 February 2021, Version of Record 25 February 2021.
DOI: https://doi.org/10.1016/j.knosys.2021.106876