Generalization Error of Combined Classifiers

Authors:

Highlights:

Abstract

We derive an upper bound on the generalization error of classifiers that can be represented as thresholded convex combinations of thresholded convex combinations of functions. Such classifiers include single hidden-layer threshold networks and voted combinations of decision trees (such as those produced by boosting algorithms). The derived bound depends on the proportion of training examples with margin less than some threshold and on the average complexity of the combined functions, where the average is taken over the weights assigned to each function in the convex combination. The complexity of each individual function in the combination depends on its closeness to threshold. By representing a decision tree as a thresholded convex combination of weighted leaf functions, we apply this result to bound the generalization error of combinations of decision trees. Previous bounds depend on the margin of the combined classifier and the average complexity of the decision trees in the combination, where the complexity of each decision tree depends on its total number of leaves. Our bound also depends on the margin of the combined classifier and the average complexity of the decision trees, but our measure of complexity for an individual decision tree is based on the distribution of the training examples over its leaves and can be significantly smaller than the total number of leaves.
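For orientation only (the paper's exact theorem is not reproduced in this abstract), margin bounds in this tradition, following Schapire, Freund, Bartlett, and Lee (1998), state that with probability at least $1-\delta$ over a training sample $S$ of $m$ examples drawn from a distribution $D$, every convex combination $f$ of base classifiers from a class of VC dimension $d$ satisfies, for every margin threshold $\theta > 0$,

\[
  \Pr_{D}\bigl[\, y f(x) \le 0 \,\bigr]
  \;\le\;
  \Pr_{S}\bigl[\, y f(x) \le \theta \,\bigr]
  + O\!\left( \frac{1}{\sqrt{m}} \left( \frac{d \,\log^{2}(m/d)}{\theta^{2}} + \log\frac{1}{\delta} \right)^{1/2} \right).
\]

The bound described in this paper keeps the same overall structure, an empirical margin term plus a complexity penalty, but replaces the worst-case complexity term with an average complexity of the combined functions, weighted by the coefficients of the convex combination; the display above is only an illustrative sketch of that style of result.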

Keywords: voting methods, margins analysis, decision trees, neural networks, boosting.

Article history: Received 8 October 1997; revised 2 April 2002; available online 8 November 2002.

DOI: https://doi.org/10.1006/jcss.2002.1854