Nonlinear single layer neural network training algorithm for incremental, nonstationary and distributed learning scenarios

Abstract:

Incremental learning of neural networks has attracted much interest in recent years due to its wide applicability to large-scale data sets and to distributed learning scenarios. Nonstationary learning paradigms have also emerged as a subarea of study in the Machine Learning literature, owing to the problems classical methods face when dealing with data set shift. In this paper we present an algorithm for training single-layer neural networks with nonlinear output functions that accounts for incremental, nonstationary and distributed learning scenarios. Moreover, it is demonstrated that introducing a regularization term into the proposed model is equivalent to choosing a particular initialization for the devised training algorithm, which may be suitable for real-time systems that have to work under noisy conditions. In addition, the algorithm includes some previous models as special cases and can be used as a building block for more complex models, such as multilayer perceptrons, extending the capacity of these models to incremental, nonstationary and distributed learning paradigms. Finally, the proposed algorithm is tested on standard data sets and compared with previous approaches, demonstrating its superior accuracy.
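The abstract claims three algorithmic properties: incremental updates, exact combination of models trained on distributed data partitions, and a regularization term that is equivalent to a particular initialization of the training algorithm. The paper's update equations are not reproduced here, so the following Python/NumPy sketch only illustrates how these three properties typically coexist in least-squares training of a single-layer network with an invertible output nonlinearity (targets are linearized through the inverse of the activation). The class and method names (SingleLayerNet, partial_fit, merge, lam) and the choice of tanh/arctanh are illustrative assumptions, not the authors' method.

import numpy as np

class SingleLayerNet:
    """Single-layer network y = f(W x), trained by accumulating the
    sufficient statistics of a regularized linear least-squares problem
    posed on the linearized targets f^{-1}(d). Illustrative sketch only."""

    def __init__(self, n_inputs, n_outputs, lam=0.0):
        # Regularization enters only through the initialization A0 = lam * I,
        # mirroring the abstract's claim that the regularized model is
        # equivalent to a particular initialization of the algorithm.
        self.lam = lam
        self.A = lam * np.eye(n_inputs)           # accumulated X^T X (+ lam*I)
        self.B = np.zeros((n_inputs, n_outputs))  # accumulated X^T f^{-1}(D)
        self.W = np.zeros((n_inputs, n_outputs))

    def partial_fit(self, X, D, act_inv=np.arctanh):
        # Incremental step: fold a new batch (X, D) into the sufficient
        # statistics; previously seen data never needs to be revisited.
        Z = act_inv(D)          # linearize targets through f^{-1}
        self.A += X.T @ X
        self.B += X.T @ Z
        self.W = np.linalg.solve(self.A, self.B)
        return self

    def merge(self, other):
        # Distributed step: models trained on disjoint partitions combine
        # exactly by summing sufficient statistics. The other node's lam*I
        # prior is subtracted so regularization is not counted twice.
        self.A += other.A - other.lam * np.eye(self.A.shape[0])
        self.B += other.B
        self.W = np.linalg.solve(self.A, self.B)
        return self

    def predict(self, X, act=np.tanh):
        return act(X @ self.W)

# Usage: two nodes each see one data partition, then merge their models;
# the merged solution matches training on the union of both partitions.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
W_true = rng.normal(size=(3, 1))
D1, D2 = np.tanh(X1 @ W_true), np.tanh(X2 @ W_true)

node_a = SingleLayerNet(3, 1, lam=1e-3).partial_fit(X1, D1)
node_b = SingleLayerNet(3, 1, lam=1e-3).partial_fit(X2, D2)
node_a.merge(node_b)

Under these assumptions, the design choice is that all information about past data is compressed into the fixed-size matrices A and B, which is what makes the same update rule serve incremental, distributed, and (with a forgetting or reweighting mechanism, not shown) nonstationary settings.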

Keywords: Artificial neural networks, Incremental learning, Nonstationary learning, Distributed learning

Article history: Received 16 June 2011; Revised 2 April 2012; Accepted 11 May 2012; Available online 22 May 2012.

DOI: https://doi.org/10.1016/j.patcog.2012.05.009