On the generalization error of fixed combinations of classifiers

Abstract

We consider the generalization error of concept learning when using a fixed Boolean function of the outputs of a number of different classifiers. Here, we take into account the ‘margins’ of each of the constituent classifiers. A special case is that in which the constituent classifiers are linear threshold functions (or perceptrons) and the fixed Boolean function is the majority function. This corresponds to a ‘committee of perceptrons,’ an artificial neural network (or circuit) consisting of a single layer of perceptrons (or linear threshold units) in which the output of the network is defined to be the majority output of the perceptrons. Recent work of Auer et al. studied the computational properties of such networks (where they were called ‘parallel perceptrons’), proposed an incremental learning algorithm for them, and demonstrated empirically that the learning rule is effective. As a corollary of the results presented here, generalization error bounds are derived for this special case that provide further motivation for the use of this learning rule.
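To make the architecture concrete, the following is a minimal sketch in Python/NumPy of a committee of perceptrons whose output is the majority vote of its constituent linear threshold units, together with the per-unit geometric margins of the kind the abstract refers to. The class, weights, and example input are illustrative assumptions for exposition only; this is not the incremental learning rule of Auer et al.

```python
import numpy as np

class PerceptronCommittee:
    """A 'committee of perceptrons': a single layer of linear threshold
    units whose majority output defines the network output.
    (Illustrative sketch; names and parameters are assumptions.)"""

    def __init__(self, weights, biases):
        # weights: (k, d) array, one row per perceptron; biases: (k,) array.
        # An odd k keeps the majority vote well defined.
        self.weights = np.asarray(weights, dtype=float)
        self.biases = np.asarray(biases, dtype=float)

    def predict(self, x):
        # Each constituent perceptron is a linear threshold function
        # outputting +1 or -1; the fixed Boolean combiner here is majority.
        votes = np.sign(self.weights @ np.asarray(x, dtype=float) + self.biases)
        return 1 if votes.sum() > 0 else -1

    def margins(self, x):
        # Geometric margin of each constituent perceptron on x:
        # |w_i . x + b_i| / ||w_i||.
        acts = self.weights @ np.asarray(x, dtype=float) + self.biases
        return np.abs(acts) / np.linalg.norm(self.weights, axis=1)

# Hypothetical committee of three perceptrons in the plane.
committee = PerceptronCommittee(
    weights=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    biases=[0.0, 0.0, -0.5],
)
print(committee.predict([0.3, 0.4]))   # majority vote of the three units
print(committee.margins([0.3, 0.4]))   # per-unit geometric margins
```

The margin values exposed here are the per-classifier quantities that margin-based generalization bounds of the sort described in the abstract typically depend on.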

Keywords: Computational learning, Complexity of learning, Generalization error, Large margins

Article history: Received 23 October 2005, Revised 20 October 2006, Available online 8 December 2006.

DOI: https://doi.org/10.1016/j.jcss.2006.10.017