Learning rates of gradient descent algorithm for classification

Abstract

In this paper, a stochastic gradient descent algorithm is proposed for binary classification problems based on general convex loss functions. It is computationally superior to existing algorithms when the sample size is large. Under some reasonable assumptions on the hypothesis space and the underlying distribution, the learning rate of the algorithm is established, and it is faster than that of closely related algorithms.
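To make the setting concrete, the following is a minimal sketch of online stochastic gradient descent for binary classification in a reproducing kernel Hilbert space, using the logistic loss as one example of a convex loss and a Gaussian kernel. The step-size schedule, regularization parameter, and all function names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # Gaussian (RBF) kernel inducing the RKHS; sigma is an assumed bandwidth.
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def kernel_sgd(X, y, sigma=1.0, lam=0.01, n_epochs=1, seed=0):
    """Kernel SGD with logistic loss (a hypothetical sketch).

    The iterate f_t(x) = sum_j alpha[j] * K(X[j], x) lives in the RKHS;
    each step uses the gradient at a single sample, so the per-step cost
    does not grow with the full sample size as batch methods do.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    alpha = np.zeros(n)
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / np.sqrt(t)  # decaying step size (a common choice)
            # Current prediction f(x_i); skip zero coefficients for speed.
            f_xi = sum(alpha[j] * gaussian_kernel(X[j], X[i], sigma)
                       for j in range(n) if alpha[j] != 0.0)
            # Derivative of the logistic loss log(1 + exp(-y f)) w.r.t. f.
            g = -y[i] / (1.0 + np.exp(y[i] * f_xi))
            alpha *= (1.0 - eta * lam)  # shrinkage from the RKHS-norm penalty
            alpha[i] -= eta * g
    return alpha

def predict(alpha, X_train, x, sigma=1.0):
    # Classify by the sign of the learned function f(x).
    score = sum(a * gaussian_kernel(xi, x, sigma)
                for a, xi in zip(alpha, X_train) if a != 0.0)
    return 1 if score >= 0 else -1
```

Because each update touches only one sample's gradient, the method scales to large sample sizes, which is the computational advantage the abstract refers to.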

Keywords: 68T05, 62J02, Classification algorithm, Stochastic gradient descent, Reproducing kernel Hilbert space, Learning rates, Computational complexity

Article history: Received 20 December 2007, Revised 13 April 2008, Available online 20 April 2008.

DOI: https://doi.org/10.1016/j.cam.2008.04.022