Deterministic convergence of an online gradient method for neural networks
Authors:
Abstract
The online gradient method has been widely used as a learning algorithm for neural networks. We establish the deterministic convergence of the online gradient method for training a class of nonlinear feedforward neural networks when the training examples are linearly independent. The learning rate η is chosen to be a constant throughout the training procedure. The monotonicity of the error function during the iteration is proved. A criterion for choosing the learning rate η is also provided to guarantee convergence. Under conditions similar to those for classical gradient methods, an optimal convergence rate for the online gradient method is proved.
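For illustration, the following is a minimal sketch of an online (per-sample) gradient update with a constant learning rate η on a single-hidden-layer feedforward network. The architecture, loss, and toy data are assumptions chosen for the example; they are not the exact class of networks or conditions analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training set: the inputs are linearly independent,
# echoing the assumption used in the paper (illustrative only).
X = np.eye(4)                            # 4 linearly independent examples in R^4
y = np.array([0.1, 0.9, 0.2, 0.8])       # target outputs

n_in, n_hidden = X.shape[1], 3
W = rng.normal(scale=0.5, size=(n_hidden, n_in))   # input-to-hidden weights
v = rng.normal(scale=0.5, size=n_hidden)           # hidden-to-output weights

eta = 0.1                                # constant learning rate, fixed throughout

def total_error(W, v):
    """Total squared error over the training set."""
    return 0.5 * sum((v @ sigmoid(W @ x) - t) ** 2 for x, t in zip(X, y))

for epoch in range(200):
    for x, t in zip(X, y):               # online: update after every single example
        h = sigmoid(W @ x)               # hidden activations
        out = v @ h                      # linear output unit
        delta = out - t                  # output error
        # Gradients of the per-example squared error 0.5*(out - t)^2.
        grad_v = delta * h
        grad_W = np.outer(delta * v * h * (1.0 - h), x)
        v -= eta * grad_v
        W -= eta * grad_W

print(f"final total error: {total_error(W, v):.6f}")
```

With a sufficiently small constant η, the total error in this sketch decreases monotonically over the epochs, which is the qualitative behavior the paper's monotonicity result describes.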
Keywords: Online stochastic gradient method, Nonlinear feedforward neural networks, Deterministic convergence, Monotonicity, Constant learning rate
Article history: Received 17 January 2001, Revised 8 June 2001, Available online 31 October 2001.
DOI: https://doi.org/10.1016/S0377-0427(01)00571-4