SLiKER: Sparse loss induced kernel ensemble regression
Authors:
Highlights:
• We develop a novel regression method based on the kernel trick and the ensemble principle. Its merit is that multi-kernel selection and parameter tuning can be carried out automatically over a pool of kernels.
• In the proposed method, we introduce sparsity to evaluate model quality. With this sparsity model, well-performing regressors are selected and the influence of poorly performing regressors is reduced.
• Experimental results on UCI regression and computer vision datasets indicate that, compared with other ensemble regression methods such as random forest and XGBoost, our method achieves the lowest regression loss and the highest classification accuracy.
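The paper's SLiKER formulation is not reproduced here, but the general idea the highlights describe (train a pool of kernel regressors with different kernel parameters, then combine them with sparsity-inducing weights so that poor base regressors are suppressed) can be sketched as follows. This is an illustrative stand-in, not the authors' algorithm: the kernel pool, the ridge/L1 hyperparameters, and the ISTA-style solver are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gram matrix of the RBF kernel exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(K, y, lam):
    # Kernel ridge regression dual coefficients: (K + lam*I)^-1 y
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

# Toy 1-D regression data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

# Pool of candidate kernels (hypothetical choices; the paper's pool differs)
gammas = [0.1, 1.0, 10.0]
preds = []
for g in gammas:
    K = rbf_kernel(X, X, g)
    a = fit_krr(K, y, lam=1e-2)
    preds.append(K @ a)
P = np.column_stack(preds)  # column j = predictions of base regressor j

# Sparse combination weights via L1-penalized least squares, solved with
# iterative soft-thresholding (a generic stand-in for the paper's sparse loss)
w = np.zeros(P.shape[1])
step = 1.0 / np.linalg.norm(P.T @ P, 2)  # 1 / Lipschitz constant
tau = 0.01                               # L1 penalty strength
for _ in range(500):
    grad = P.T @ (P @ w - y)
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * tau, 0.0)  # soft threshold

y_hat = P @ w
print("ensemble weights:", np.round(w, 3))
print("train MSE:", float(np.mean((y - y_hat) ** 2)))
```

Base regressors whose predictions do not help explain `y` receive weights driven toward exactly zero by the soft-thresholding step, which is the mechanism by which a sparsity-inducing loss downweights badly behaved regressors.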
Keywords: Multiple kernels, Ensemble regression, Sparse loss, Classification
Article history: Received 6 January 2020, Revised 27 May 2020, Accepted 9 August 2020, Available online 13 August 2020, Version of Record 23 August 2020.
DOI: https://doi.org/10.1016/j.patcog.2020.107587