Learning representations of multivariate time series with missing data

Authors:

Highlights:

• We design a recurrent autoencoder architecture to compress multivariate time series with missing data.

• An additional regularization term aligns the learned representations with a prior kernel, which accounts for missing data.

• Even with a large amount of missing data, time series belonging to different classes become well separated in the induced latent space.

• We exploit the proposed architecture to design methods for anomaly detection and for imputing missing data.

• We perform an analysis to investigate which kind of time series can be effectively encoded using recurrent layers.

Keywords: Representation learning, Multivariate time series, Autoencoders, Recurrent neural networks, Kernel methods

Article history: Received 29 August 2018, Revised 30 May 2019, Accepted 15 July 2019, Available online 19 July 2019, Version of Record 22 July 2019.

DOI: https://doi.org/10.1016/j.patcog.2019.106973