Single-Iteration Training Algorithm for Multi-Layer Feed-Forward Neural Networks

Authors: J. Barhen, R. Cogswell, V. Protopopescu

Abstract

A new methodology for neural learning is presented. Only a single iteration is needed to train a feed-forward network with near-optimal results. This is achieved by introducing a key modification to the conventional multi-layer architecture. A virtual input layer is implemented, which is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating direction singular value decompositions is then used to determine precisely the inter-layer synaptic weights. This computational paradigm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network. Examples show that the trained neural networks generalize well.
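The abstract's core idea — separating the nonlinear activation step (the fixed "virtual input" transfer function) from the linear inter-layer propagation so the weights can be solved directly rather than iteratively — can be illustrated with a minimal sketch. This is not the authors' exact alternating-SVD algorithm; the random tanh feature expansion and the toy sine-regression target are illustrative assumptions, with the linear synapses solved in one shot by an SVD-based least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target: y = sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# "Virtual input layer": a fixed nonlinear transfer from the nominal inputs.
# A random tanh feature expansion is an illustrative choice, not the paper's.
W_virtual = rng.normal(size=(1, 50))
b_virtual = rng.normal(size=(1, 50))
H = np.tanh(X @ W_virtual + b_virtual)        # nonlinear activation step

# Regular (linear) synapses to the next layer, solved exactly in a single
# step via SVD-based least squares (np.linalg.lstsq uses an SVD internally).
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ W_out
mse = float(np.mean((y_hat - y) ** 2))
print(f"train MSE: {mse:.2e}")
```

Because the only trainable parameters sit in the linear stage, the fit requires no gradient iterations; the nonlinear stage is fixed, which is what makes the linear/nonlinear separability exploitable.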

Keywords: virtual input layer, neural network training, fast learning, SVD

DOI: https://doi.org/10.1023/A:1009682730770