An efficient modified boosting method for solving classification problems

Authors:

Highlights:

Abstract

Based on the Adaboost algorithm, a modified boosting method is proposed in this paper for solving classification problems. The method predicts the class label of an example by weighted majority voting over an ensemble of classifiers. Each classifier is obtained by applying a given weak learner to a subsample (smaller than the original training set) drawn from the training set according to the probability distribution maintained over it. A parameter is introduced into the reweighting scheme of Adaboost, used to update the probabilities assigned to training examples, so that the algorithm can be more accurate than Adaboost. Experimental results on synthetic data and several real-world data sets from the UCI repository show that the proposed method improves the prediction accuracy, execution speed, and robustness to classification noise of Adaboost. Furthermore, the diversity–accuracy patterns of the ensemble classifiers are investigated by kappa–error diagrams.
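The abstract's boosting loop (draw a subsample according to the maintained distribution, fit a weak learner on it, then update the example probabilities with an extra parameter) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the subsample fraction, the decision-stump weak learner, and the way the parameter `theta` scales the Adaboost-style exponential update are all assumptions, since the paper's exact formulas are not given here.

```python
import numpy as np

def fit_stump(X, y):
    """Weak learner: a 1-level decision stump over all features/thresholds."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, -sign, sign)
                err = np.mean(pred != y)
                if err < best_err:
                    best_err, best = err, (j, thr, sign)
    return best

def stump_predict(stump, X):
    j, thr, sign = stump
    return np.where(X[:, j] <= thr, -sign, sign)

def modified_boost(X, y, T=20, subsample_frac=0.5, theta=1.0, seed=None):
    """Hypothetical sketch of the modified boosting scheme (labels in {-1,+1})."""
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.full(n, 1.0 / n)                 # distribution over training examples
    m = max(1, int(subsample_frac * n))     # subsample size < n (assumed fraction)
    learners, alphas = [], []
    for _ in range(T):
        # Draw a subsample according to the current distribution D.
        idx = rng.choice(n, size=m, replace=True, p=D)
        h = fit_stump(X[idx], y[idx])
        pred = stump_predict(h, X)
        err = D[pred != y].sum()            # weighted error on the full set
        if err == 0:                        # perfect round: keep it and stop
            learners.append(h); alphas.append(1.0)
            break
        if err >= 0.5:                      # weak-learning condition violated
            break
        alpha = 0.5 * np.log((1 - err) / err)
        # Adaboost-style reweighting, scaled by the assumed parameter theta.
        D *= np.exp(-theta * alpha * np.where(pred == y, 1, -1))
        D /= D.sum()
        learners.append(h); alphas.append(alpha)
    return learners, alphas

def ensemble_predict(learners, alphas, X):
    """Weighted majority vote of the ensemble."""
    votes = sum(a * stump_predict(h, X) for h, a in zip(learners, alphas))
    return np.where(votes >= 0, 1, -1)
```

Setting `theta=1.0` recovers the standard Adaboost update in this sketch; the paper's contribution is tuning such a parameter (and training on smaller subsamples) to gain accuracy and speed.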

Keywords: 68T05, 68T10, 62H30, Ensemble classifier, Weak learner, Adaboost, Classification noise, Kappa–error diagram

Article history: Received 6 October 2006, Revised 23 January 2007, Available online 12 March 2007.

DOI: https://doi.org/10.1016/j.cam.2007.03.003