Hierarchical mixing linear support vector machines for nonlinear classification

Authors:

Highlights:

• H-MLSVMs do not require a large amount of memory to store kernel values during training, because they are composed of LSVMs, for which very efficient training algorithms exist [48], [49].

• The hierarchical structure lets H-MLSVMs predict the labels of newly arrived samples by evaluating only a few LSVMs, which is much faster than nonlinear SVM classifiers.

• We quantify the generalization error bound for the class of LLSVMs via Rademacher complexity; a stopping criterion that minimizes this bound ensures that H-MLSVMs effectively avoid overfitting and achieve good classification performance.
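The second highlight describes prediction through a hierarchy of linear SVMs: a sample is routed down a tree of linear decisions and finally classified by a single leaf LSVM, so the cost depends on the tree depth rather than on the training-set size (as kernel SVM prediction does). The paper itself does not give code here; the following is a minimal illustrative sketch of such a tree-of-hyperplanes predictor, with all names (`Node`, `predict`) and the toy XOR example being our own assumptions, not the authors' implementation.

```python
import numpy as np

class Node:
    """One node of a hypothetical H-MLSVM-style tree: an internal node
    routes a sample with a linear decision w.x + b; a leaf node
    classifies the sample with its own linear hyperplane."""
    def __init__(self, w, b, left=None, right=None):
        self.w, self.b = np.asarray(w, dtype=float), float(b)
        self.left, self.right = left, right  # both None for a leaf

    def is_leaf(self):
        return self.left is None and self.right is None

def predict(node, x):
    """Follow at most depth-many linear decisions, then one leaf LSVM.
    Prediction cost is O(depth * d), independent of training-set size."""
    x = np.asarray(x, dtype=float)
    while not node.is_leaf():
        # route left on a nonnegative score, right otherwise
        node = node.left if node.w @ x + node.b >= 0 else node.right
    return 1 if node.w @ x + node.b >= 0 else -1

# Toy XOR-like problem that no single LSVM can separate, but a
# depth-1 mixture of linear pieces can: route on x0, classify on x1.
leaf_pos = Node(w=[0.0, 1.0], b=0.0)    # predicts sign(x1)
leaf_neg = Node(w=[0.0, -1.0], b=0.0)   # predicts sign(-x1)
root = Node(w=[1.0, 0.0], b=0.0, left=leaf_pos, right=leaf_neg)

# XOR labels: quadrants I and III -> +1, quadrants II and IV -> -1
for x, y in [([1, 1], 1), ([-1, 1], -1), ([-1, -1], 1), ([1, -1], -1)]:
    assert predict(root, x) == y
```

This sketches only the prediction path; how the hierarchy is grown (and when growth stops via the Rademacher-based bound) is the subject of the paper itself.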

Abstract:


Keywords: Support vector machine, Classification, Hierarchical structure

Article history: Received 30 July 2015, Revised 20 February 2016, Accepted 22 February 2016, Available online 15 March 2016, Version of Record 23 August 2016.

DOI: https://doi.org/10.1016/j.patcog.2016.02.018