Position-aware self-attention based neural sequence labeling

Authors:

Highlights:

• This paper identifies the problem of modeling discrete context dependencies in sequence labeling tasks.

• This paper develops a well-designed self-attentional context fusion network that provides complementary context information on top of a Bi-LSTM.

• This paper proposes a novel position-aware self-attention that incorporates three different positional factors to exploit the relative position information among tokens (see the sketch after this list).

• The proposed model achieves state-of-the-art performance on part-of-speech (POS) tagging, named entity recognition (NER) and phrase chunking tasks.
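The highlights do not spell out the attention mechanism itself, and the paper's three positional factors are not reproduced in this listing. As a rough illustration of what position-aware self-attention over Bi-LSTM outputs can look like, below is a minimal NumPy sketch of self-attention whose scores are biased by relative-position embeddings (in the style of Shaw et al., 2018). The function name, shapes, single-head setup, and the clipping distance `max_dist` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_aware_self_attention(X, Wq, Wk, Wv, rel_emb, max_dist=8):
    """Single-head self-attention with relative-position score biases
    (a generic sketch, not the paper's exact three-factor formulation).

    X:        (n, d) token representations (e.g. Bi-LSTM outputs)
    Wq/Wk/Wv: (d, d) query/key/value projections
    rel_emb:  (2*max_dist+1, d) embeddings for offsets in [-max_dist, max_dist]
    """
    n, d = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    # Content-content attention scores.
    scores = Q @ K.T / np.sqrt(d)

    # Content-position scores: bias each query-key pair by the embedding
    # of their (clipped) relative offset j - i.
    offsets = np.clip(np.arange(n)[None, :] - np.arange(n)[:, None],
                      -max_dist, max_dist) + max_dist   # (n, n) indices
    R = rel_emb[offsets]                                # (n, n, d)
    scores += np.einsum('id,ijd->ij', Q, R) / np.sqrt(d)

    A = softmax(scores, axis=-1)   # (n, n) attention weights
    return A @ V                   # (n, d) context-fused representations

# Tiny usage example with random weights (shape check only).
rng = np.random.default_rng(0)
n, d, max_dist = 5, 16, 8
X = rng.normal(size=(n, d))
out = position_aware_self_attention(
    X, rng.normal(size=(d, d)), rng.normal(size=(d, d)),
    rng.normal(size=(d, d)), rng.normal(size=(2 * max_dist + 1, d)))
print(out.shape)  # (5, 16)
```

With randomly initialized weights the output is only a shape check; in the paper's setting the inputs would be Bi-LSTM hidden states and the projections would be learned jointly with the tagger.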

Keywords: Sequence labeling, Self-attention, Discrete context dependency

Article history: Received 24 January 2020, Revised 30 April 2020, Accepted 6 September 2020, Available online 7 September 2020, Version of Record 10 September 2020.

Paper URL: https://doi.org/10.1016/j.patcog.2020.107636