AISE: Attending to Intent and Slots Explicitly for better spoken language understanding

Authors:

Highlights:

Abstract

Spoken language understanding (SLU) plays a central role in dialog systems and typically involves two tasks: intent detection and slot filling. Existing joint models improve performance by introducing richer semantic features of words, intents, and slots. However, methods that explicitly model the interactions between these features have not been explored further. In this paper, we propose a novel joint model based on a position-aware multi-head masked attention mechanism, which explicitly models the interaction between word encoding features and intent–slot features, thereby generating context features that contribute to slot filling. In addition, we adopt a multi-head attention mechanism to summarize utterance-level semantic knowledge for intent detection. Experiments show that our model achieves state-of-the-art results and improves sentence-level semantic frame accuracy by 2.30% and 0.69% relative to the previous best model on the SNIPS and ATIS datasets, respectively.
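The core mechanism the abstract names can be illustrated with a minimal sketch. The snippet below shows generic multi-head attention with an additive mask, plus one plausible reading of "position-aware" masking as a local window around each token; the exact mask construction, dimensions, and feature fusion used by AISE are defined in the paper itself, so every function and parameter here (`local_window_mask`, `window`, `num_heads`) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_window_mask(seq_len, window=2, neg=-1e9):
    """Illustrative position-aware mask: each position may attend only to
    tokens within `window` steps of itself (0 = allowed, large negative
    = blocked before the softmax). The paper's actual mask may differ."""
    idx = np.arange(seq_len)
    allowed = np.abs(idx[:, None] - idx[None, :]) <= window
    return np.where(allowed, 0.0, neg)

def masked_multihead_attention(Q, K, V, mask, num_heads=2):
    """Generic multi-head attention over (seq_len, d_model) inputs.
    The additive `mask` is applied to the score matrix of every head,
    so blocked positions receive ~zero attention weight."""
    seq_len, d_model = Q.shape
    d_head = d_model // num_heads
    outputs = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        q, k, v = Q[:, s], K[:, s], V[:, s]
        scores = q @ k.T / np.sqrt(d_head) + mask  # mask before softmax
        outputs.append(softmax(scores) @ v)       # per-head context
    return np.concatenate(outputs, axis=-1)       # concat heads

# Toy usage: 5 tokens, 8-dim encodings, 2 heads, window of 1.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
context = masked_multihead_attention(x, x, x, local_window_mask(5, window=1))
```

In the model described by the abstract, the queries/keys/values would come from the word encodings and intent–slot features rather than a single tensor `x`, and the resulting context features feed the slot-filling decoder.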

Keywords: Spoken language understanding, Intent detection, Slot filling, Position-aware multi-head masked attention mechanism

Article history: Received 16 April 2020, Revised 13 October 2020, Accepted 13 October 2020, Available online 19 October 2020, Version of Record 1 November 2020.

DOI: https://doi.org/10.1016/j.knosys.2020.106537