A new convex objective function for the supervised learning of single-layer neural networks
Authors:
Abstract
This paper proposes a novel supervised learning method for single-layer feedforward neural networks. The approach uses an alternative objective function to the one based on the MSE, measuring the errors before the neurons' nonlinear activation functions instead of after them. As a result, the solution can be obtained easily by solving systems of linear equations, requiring much less computational power than regular methods. A theoretical study is included to prove the approximate equivalence between the global optimum of the objective function based on the regular MSE criterion and that of the proposed alternative MSE function. Furthermore, it is shown that the presented method supports incremental and distributed learning. An exhaustive experimental study, covering 10 classification and 16 regression problems, verifies the soundness and efficiency of the method. In addition, a comparison with other high-performance learning algorithms shows that the proposed method exhibits, on average, the best performance while requiring low computational resources.
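To make the idea concrete, the following is a minimal sketch of this pre-activation least-squares fit, assuming a logistic activation. The function name `fit_before_activation`, the derivative-squared weighting used to approximate the post-activation MSE, and the toy data are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

def fit_before_activation(X, d, eps=1e-6):
    """Sketch: fit a single-layer net by measuring the error before the
    logistic activation, so the problem becomes linear least squares.
    (Illustrative implementation; names are not from the paper.)

    X : (n_samples, n_inputs) inputs; a bias column is appended.
    d : (n_samples,) desired outputs in (0, 1).
    """
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias term
    d = np.clip(d, eps, 1 - eps)                   # keep the inverse finite
    d_bar = np.log(d / (1 - d))                    # logit = inverse activation
    f_prime = d * (1 - d)                          # logistic derivative at d_bar
    # Weighted linear least squares: minimize sum f'(d_bar)^2 (w.x - d_bar)^2.
    # The weighting approximates the post-activation MSE near the targets;
    # the objective is quadratic in w, so one linear system gives the optimum.
    W = f_prime ** 2
    A = X.T @ (W[:, None] * X)
    b = X.T @ (W * d_bar)
    return np.linalg.solve(A, b)

# Usage: recover known weights from noiseless logistic outputs
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
d = 1.0 / (1.0 + np.exp(-(X @ w_true)))
w = fit_before_activation(X, d)
print(w)  # approximately [1.0, -2.0, 0.5, 0.0] (last entry is the bias)
```

Because the objective is quadratic in the weights, the normal equations A w = b yield the global optimum in a single solve; the matrices A and b are sums over samples, which is what makes the incremental and distributed variants mentioned in the abstract possible.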
Keywords: Single-layer neural networks, Global optimum, Supervised learning method, Least squares, Convex optimization, Incremental learning
Article history: Received 8 June 2009, Revised 27 October 2009, Accepted 26 November 2009, Available online 4 December 2009.
DOI: https://doi.org/10.1016/j.patcog.2009.11.024