gDART: Improving rumor verification in social media with Discrete Attention Representations


Abstract

Due to the harmful impact of fabricated information on social media, many rumor verification techniques have been introduced in recent years. Advanced techniques such as multi-task learning (MTL) and shared-private models suffer from strategic limitations that restrict their capability for veracity identification on social media: they often rely on multiple auxiliary tasks to serve the primary objective. Even the most recent deep neural network (DNN) models such as VRoC, Hierarchical-PSV, and StA-HiTPLAN, based on VAE, GCN, and Transformer architectures respectively with improved modifications, perform well on the veracity identification task, but mostly with the help of additional auxiliary information. Even so, their gains are not substantial relative to the proposed model, which uses no additional information. To provide an improved DNN architecture, we introduce globally Discrete Attention Representations from Transformers (gDART). The Discrete-Attention mechanism in gDART is capable of capturing multifarious correlations veiled among the sequence of words, which existing DNN models, including the Transformer, often overlook. Our proposed framework uses a Branch-CoRR Attention Network to extract highly informative features in branches and employs a Feature Fusion Network Component to identify deeply embedded features and use them for enhanced veracity identification of an unverified claim. Moreover, to achieve its goal, gDART does not depend on any costly auxiliary resource but on an unsupervised learning process. Extensive experiments reveal that gDART achieves a considerable performance gain in the veracity identification task over state-of-the-art models on two real-world rumor datasets, reporting gains of 36.76% and 40.85% on standard benchmark metrics.
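The abstract does not specify the exact form of gDART's Discrete-Attention, but it positions it as a modification of standard Transformer attention. As background, the sketch below shows the vanilla scaled dot-product attention that such variants build on; the function name and toy shapes are illustrative, not from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Vanilla Transformer attention: softmax(QK^T / sqrt(d_k)) V.

    This is only the standard baseline; gDART's Discrete-Attention
    is described as capturing correlations that this form overlooks,
    and its exact formulation is given in the paper, not here.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # weighted sum of values

# toy self-attention over 4 token embeddings of dimension 8
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the value rows, so information mixes only along the single similarity pattern encoded by QK^T; mechanisms like the paper's Discrete-Attention aim to expose additional correlation structure beyond this.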

Keywords: Rumor verification, Transformer, Branch-CoRR network, Discrete-Attention, Unsupervised loss, Correlations

Article history: Received 18 October 2021; Revised 1 March 2022; Accepted 6 March 2022; Available online 28 March 2022; Version of Record 28 March 2022.

DOI: https://doi.org/10.1016/j.ipm.2022.102927