Convergence of an online gradient method for feedforward neural networks with stochastic inputs
Abstract
In this paper, we study the convergence of an online gradient method for feedforward neural networks. The input training examples are permuted stochastically in each cycle of iteration. A monotonicity result and a weak convergence result of a deterministic nature are proved.
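The training scheme described above can be sketched as follows. This is a minimal illustrative example, not the paper's construction: the single-hidden-layer architecture, sigmoid activation, squared-error loss, learning rate, and all function and parameter names are assumptions. The key feature from the abstract is that the training examples are visited in a fresh random permutation each cycle, with one weight update per example.

```python
import numpy as np

def online_gradient_train(X, y, hidden=8, cycles=200, lr=0.05, seed=0):
    """Online gradient method for a one-hidden-layer feedforward net.

    One weight update per training example; the examples are randomly
    permuted at the start of each cycle (the "stochastic inputs" of the
    abstract). Architecture and hyperparameters are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.5, size=(d, hidden))  # input-to-hidden weights
    v = rng.normal(scale=0.5, size=hidden)       # hidden-to-output weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(cycles):
        for i in rng.permutation(n):             # stochastic input order
            h = sigmoid(X[i] @ W)                # hidden-layer activations
            err = h @ v - y[i]                   # scalar output error
            # Gradients of the squared error for this single example.
            grad_v = err * h
            grad_W = err * np.outer(X[i], v * h * (1.0 - h))
            v -= lr * grad_v
            W -= lr * grad_W
    return W, v

def predict(W, v, X):
    """Forward pass of the trained network."""
    return (1.0 / (1.0 + np.exp(-(X @ W)))) @ v
```

The paper's monotonicity result concerns the behavior of the error over such cycles; in this sketch one would track the total squared error after each full permutation to observe it empirically.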
Keywords: 68T01, Feedforward neural networks, Online gradient method, Convergence, Stochastic inputs
Article history: Received 20 August 2002, Revised 25 February 2003, Available online 1 December 2003.
DOI: https://doi.org/10.1016/j.cam.2003.08.062