Modulating scalable Gaussian processes for expressive statistical learning

Authors:

Highlights:

• New scalable Gaussian process (GP) paradigms that introduce additional modulation variables for learning rich statistical representations, e.g., heteroscedastic noise, multi-modality and non-stationarity, from massive data.

• Different variational inference strategies to arrive at analytical or tight evidence lower bounds (ELBOs) for efficient and effective model training.

• Comprehensive comparison against state-of-the-art GP and neural network counterparts to showcase the superiority of scalable modulated GPs.
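To make the first highlight concrete, below is a minimal NumPy sketch of GP regression with heteroscedastic (input-dependent) noise, where a per-point noise vector stands in for a second "modulating" latent function. This is an illustrative toy with exact inference, not the paper's scalable variational method; the kernel hyperparameters and the `noise_var` schedule are assumptions for the example.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: k(x, x') = s^2 * exp(-||x - x'||^2 / (2 l^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def hetero_gp_predict(X, y, Xs, noise_var, lengthscale=1.0, variance=1.0):
    """Posterior mean/variance of the latent function under heteroscedastic noise.

    noise_var holds one noise variance per training point, e.g. the exponential
    of a modulating latent function evaluated at X (fixed here for simplicity).
    """
    K = rbf_kernel(X, X, lengthscale, variance) + np.diag(noise_var)
    Ks = rbf_kernel(Xs, X, lengthscale, variance)
    Kss = rbf_kernel(Xs, Xs, lengthscale, variance)
    L = np.linalg.cholesky(K)                       # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha                               # E[f(Xs) | y]
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss - v.T @ v)                    # Var[f(Xs) | y]
    return mean, var

# Toy data: low noise on the left half of the input space, high noise on the right.
X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
noise_var = np.where(X[:, 0] < 0.5, 1e-4, 1.0)
mean, var = hetero_gp_predict(X, y, X, noise_var, lengthscale=0.2)
```

Because the noise enters point-wise through `np.diag(noise_var)`, the posterior variance of the latent function stays small where the noise is low and grows where the noise is high, which is the behaviour the modulated GPs in the paper capture (at scale, via variational ELBOs rather than the exact Cholesky solve used here).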


Keywords: Gaussian process, Modulation, Scalability, Heteroscedastic noise, Multi-modality, Non-stationarity

Article history: Received 29 August 2020; Revised 15 June 2021; Accepted 19 June 2021; Available online 2 July 2021; Version of Record 2 July 2021.

DOI: https://doi.org/10.1016/j.patcog.2021.108121