A new method for sparsity control in support vector classification and regression

Authors:

Highlights:

Abstract

A new method of implementing Support Vector learning algorithms for classification and regression is presented that addresses the problems of over-defined solutions and excessive complexity. Classification problems are solved with a minimum number of support vectors, irrespective of the degree of overlap in the training data. Support vector regression can deliver a sparse solution without requiring Vapnik's ε-insensitive zone. This paper generalises sparsity control for both support vector classification and regression. The novelty of this work lies in the method of achieving a sparse support vector set that forms a minimal basis for the prediction function.
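For context (this is not the paper's method): the standard SVR sparsity mechanism that the abstract contrasts against is Vapnik's ε-insensitive loss, in which prediction errors smaller than ε cost nothing, so points inside the ε-tube receive zero dual coefficients. A minimal NumPy sketch, with illustrative function names and an assumed ε value:

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Vapnik's epsilon-insensitive loss: errors within eps cost nothing."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

# Residuals inside the eps-tube (|r| <= 0.1) incur zero loss, which is why
# those points get zero dual coefficients in standard SVR. The paper's
# method aims to obtain a sparse solution without relying on this zone.
residuals = np.array([0.05, -0.08, 0.3, -0.5])
losses = eps_insensitive_loss(np.zeros(4), residuals, eps=0.1)
```

Here `losses` is zero for the first two residuals and positive only for the two points outside the tube, mirroring how the support vector set shrinks as ε grows.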

Keywords: Support vector machines, Regression, Sparse approximation, Structural risk minimisation, Neural network optimisation

Article history: Received 8 July 1998, Revised 26 July 1999, Accepted 30 August 1999, Available online 7 June 2001.

DOI: https://doi.org/10.1016/S0031-3203(99)00203-4