Encoding features robust to unseen modes of variation with attentive long short-term memory
Authors:
Highlights:
• We devise a novel LSTM adaptation called the attentive mode variational LSTM.
• An attention mechanism is proposed to separate the input signal into task-relevant and task-irrelevant feature sequences (see the sketch after this list).
• The proposed method encodes the mode of variation in the input sequence at test time using the task-irrelevant feature sequence.
• The proposed method encodes features robust to modes of variation unseen during training.
• The proposed attentive mode variational LSTM can be generalized to different applications.
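The highlights describe the mechanism only at a high level. As a rough illustration, the sketch below shows one plausible way such an attention-based split could be wired in PyTorch: a sigmoid gate divides each timestep's features into a task-relevant and a task-irrelevant stream, a second LSTM summarizes the irrelevant stream as the mode of variation, and that summary is subtracted from the task encoding. The module names, the gating form, and the subtraction step are all assumptions made for illustration, not the published model.

```python
# Minimal sketch (not the authors' code): an attention gate splits each
# timestep's features into task-relevant and task-irrelevant streams; the
# task-irrelevant stream is summarized as the sequence's mode of variation.
# All names and the exact wiring are illustrative assumptions.
import torch
import torch.nn as nn

class AttentiveModeSplitLSTM(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        # Per-timestep attention gate in (0, 1): how task-relevant is x_t?
        self.gate = nn.Sequential(nn.Linear(in_dim, 1), nn.Sigmoid())
        # One LSTM encodes the task-relevant stream ...
        self.task_lstm = nn.LSTM(in_dim, hid_dim, batch_first=True)
        # ... another encodes the task-irrelevant (mode-of-variation) stream.
        self.mode_lstm = nn.LSTM(in_dim, hid_dim, batch_first=True)
        # Classify from the task encoding with the mode summary removed,
        # a simple stand-in for "robust to unseen variation".
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, in_dim)
        alpha = self.gate(x)              # (batch, time, 1)
        relevant = alpha * x              # task-relevant feature sequence
        irrelevant = (1.0 - alpha) * x    # task-irrelevant feature sequence
        _, (h_task, _) = self.task_lstm(relevant)
        _, (h_mode, _) = self.mode_lstm(irrelevant)
        # Subtract the encoded mode of variation from the task encoding.
        robust = h_task[-1] - h_mode[-1]
        return self.classifier(robust)

# Usage: classify a batch of 8 sequences, 20 timesteps, 64-dim features.
model = AttentiveModeSplitLSTM(in_dim=64, hid_dim=128, num_classes=7)
logits = model(torch.randn(8, 20, 64))
print(logits.shape)  # torch.Size([8, 7])
```

The subtraction of the mode summary is one simple way to make the final encoding insensitive to the irrelevant stream; the paper's actual mechanism for exploiting the task-irrelevant sequence at test time may differ.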
Keywords: Long short-term memory, Recurrent neural networks, Attention, Robust features, Modes of variation, Facial expression recognition, Human action recognition
Article history: Received 25 January 2019, Revised 10 December 2019, Accepted 11 December 2019, Available online 18 December 2019, Version of Record 2 January 2020.
DOI: https://doi.org/10.1016/j.patcog.2019.107159