A sparsity driven kernel machine based on minimizing a generalization error bound
Abstract
A new sparsity-driven kernel classifier is presented, based on the minimization of a recently derived data-dependent generalization error bound. The objective function consists of the usual hinge loss function penalizing training errors and a concave penalty function of the expansion coefficients. The problem of minimizing the non-convex bound is addressed by a successive linearization approach, whereby the problem is transformed into a sequence of linear programs. The algorithm produced error rates comparable to those of the standard support vector machine while significantly reducing the number of support vectors and, consequently, the classification time.
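To illustrate the successive linearization idea described above, the sketch below shows one plausible realization (not the authors' exact formulation): a concave penalty of the form log(eps + |alpha_j|) on the kernel expansion coefficients is linearized at the current iterate, so each outer step reduces to a weighted-L1, hinge-loss problem that can be solved as a linear program. The kernel, penalty form, and all parameter values (lam, eps, gamma, number of outer iterations) are assumptions for demonstration only.

```python
# Minimal sketch of a sparsity-driven kernel classifier via successive
# linearization of a concave penalty; each outer iteration is an LP.
import numpy as np
from scipy.optimize import linprog

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparse_kernel_lp(K, y, lam=0.05, eps=1e-3, n_outer=5):
    """Successive linearization: each outer step solves a weighted-L1 LP."""
    n = len(y)
    w = np.ones(n)                      # linearization weights for |alpha_j|
    alpha, b = np.zeros(n), 0.0
    I = np.eye(n)
    yK = y[:, None] * K                 # kernel rows scaled by the labels
    for _ in range(n_outer):
        # LP variables: [alpha+ (n), alpha- (n), b+, b-, xi (n)]
        c = np.concatenate([lam * w, lam * w, [0.0, 0.0], np.ones(n)])
        # hinge constraints: y_i (K_i alpha + b) >= 1 - xi_i, rewritten as <=
        A_ub = np.hstack([-yK, yK, -y[:, None], y[:, None], -I])
        b_ub = -np.ones(n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (3 * n + 2), method="highs")
        x = res.x
        alpha = x[:n] - x[n:2 * n]
        b = x[2 * n] - x[2 * n + 1]
        # re-linearize the concave penalty log(eps + |alpha_j|)
        w = 1.0 / (eps + np.abs(alpha))
    return alpha, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 2))
    y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)   # XOR-like labels
    K = rbf_kernel(X, X)
    alpha, b = sparse_kernel_lp(K, y)
    pred = np.sign(K @ alpha + b)
    n_sv = int(np.sum(np.abs(alpha) > 1e-6))
    print(f"training accuracy: {np.mean(pred == y):.2f}, "
          f"nonzero coefficients: {n_sv}/{len(y)}")
```

The reweighting step is what drives sparsity: coefficients that shrink toward zero receive ever larger weights in the next linear program, pushing them exactly to zero and thereby reducing the number of retained support vectors.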
Keywords: Sparsity, Classification, Generalization error bounds, Statistical learning theory
Article history: Received 7 June 2008, Revised 3 January 2009, Accepted 2 March 2009, Available online 13 March 2009.
DOI: https://doi.org/10.1016/j.patcog.2009.03.006