On the importance of sluggish state memory for learning long term dependency

Authors:

Highlights:

Abstract

The vanishing gradient problem inherent in Simple Recurrent Networks (SRN) trained with back-propagation has led to a significant shift towards Long Short-Term Memory (LSTM) and Echo State Networks (ESN), which overcome this problem through second-order error-carousel schemes or different learning algorithms, respectively. This paper re-opens the case for SRN-based approaches by considering a variant, the Multi-recurrent Network (MRN). We show that memory units embedded within its architecture can mitigate the vanishing gradient problem by providing variable sensitivity to recent and more historic information through layer- and self-recurrent links with varied weights, which together form a so-called sluggish state-based memory. We demonstrate that an MRN, optimised with noise injection, is able to learn the long-term dependency within a complex grammar-induction task, significantly outperforming the SRN, NARX and ESN. Analysis of the internal representations of the networks reveals that the sluggish state-based representations of the MRN are best able to latch on to critical temporal dependencies spanning variable time delays, maintaining distinct and stable representations of all underlying grammar states. Surprisingly, the ESN was unable to fully learn the dependency problem, suggesting that the major shift towards this class of models may be premature.
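To make the idea of a sluggish state-based memory concrete, the sketch below (not the authors' implementation; layer sizes and decay coefficients are illustrative assumptions) shows memory banks that each copy the hidden layer but decay at their own rate, so strongly self-recurrent banks retain older context while weakly self-recurrent banks track recent input.

```python
import numpy as np

class SluggishMemory:
    """Minimal MRN-style memory bank sketch: one bank per self-recurrence (decay) rate."""

    def __init__(self, hidden_size, decays=(0.0, 0.25, 0.5, 0.75)):
        self.decays = np.array(decays)                      # assumed self-recurrent weights per bank
        self.state = np.zeros((len(decays), hidden_size))   # one memory bank per decay rate

    def update(self, hidden):
        # bank_i(t) = decay_i * bank_i(t-1) + (1 - decay_i) * hidden(t)
        self.state = (self.decays[:, None] * self.state
                      + (1.0 - self.decays)[:, None] * hidden[None, :])
        return self.state.reshape(-1)                       # concatenated context fed back to the hidden layer


# Usage example: pass a sequence of hidden activations through the banks.
mem = SluggishMemory(hidden_size=8)
for t in range(5):
    h = np.tanh(np.random.randn(8))
    context = mem.update(h)   # in an MRN this would be concatenated with the next input
```

Because each bank integrates the hidden state at a different timescale, the concatenated context offers the network simultaneous views of recent and more historic information, which is the property the abstract credits with latching on to long-range dependencies.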

Keywords: Simple Recurrent Networks, Vanishing gradient problem, Echo State Network, Grammar prediction task, Sluggish state space, Internal representation

Article history: Received 7 May 2015, Revised 27 October 2015, Accepted 27 December 2015, Available online 5 January 2016, Version of Record 4 February 2016.

DOI: https://doi.org/10.1016/j.knosys.2015.12.024