Analysis of decision boundaries in linearly combined neural classifiers
Authors: Kagan Tumer, Joydeep Ghosh
Abstract
Combining or integrating the outputs of several pattern classifiers has led to improved performance in a multitude of applications. This paper provides an analytical framework to quantify the improvements in classification results due to combining. We show that combining networks linearly in output space reduces the variance of the actual decision region boundaries around the optimum boundary. This result is valid under the assumption that the a posteriori probability distributions for each class are locally monotonic around the Bayes optimum boundary. In the absence of classifier bias, the error is shown to be proportional to the boundary variance, resulting in a simple expression for error rate improvements. In the presence of bias, the error reduction, expressed in terms of a bias reduction factor, is shown to be less than or equal to the reduction obtained in the absence of bias. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space.
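The variance-reduction claim can be illustrated with a minimal Monte Carlo sketch. Assuming the additive-error setting the abstract describes, with zero-mean, i.i.d. posterior-estimation noise and locally linear (monotonic) posteriors near the Bayes boundary, averaging the outputs of N classifiers averages their boundary offsets, so the boundary variance, and with it the added error in the unbiased case, falls by a factor of N. The construction below (slope s, noise scale sigma_eta, and the closed-form offset b = (eta_0 - eta_1)/s) is an illustrative model chosen for this sketch, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D model (an assumption for this sketch): the true
# posteriors cross at the Bayes-optimal boundary x*, and the difference
# p(c1|x) - p(c0|x) is locally linear there with slope s. Each classifier
# adds zero-mean noise eta_k to its posterior estimate, which shifts its
# decision boundary by b = (eta_0 - eta_1) / s.
s = 1.0            # slope of the posterior difference near the optimum boundary
sigma_eta = 0.2    # std of each classifier's posterior-estimation noise
N = 5              # number of classifiers combined by output averaging
trials = 100_000

# Boundary offsets of N individual classifiers, per trial.
eta0 = rng.normal(0.0, sigma_eta, size=(trials, N))
eta1 = rng.normal(0.0, sigma_eta, size=(trials, N))
b = (eta0 - eta1) / s

# Linear combining in output space averages the noise, so (with locally
# linear posteriors) the combined boundary offset is the mean of the
# individual offsets.
b_single = b[:, 0]
b_ave = b.mean(axis=1)

print(f"var(single boundary)   = {b_single.var():.5f}")
print(f"var(averaged boundary) = {b_ave.var():.5f}")
print(f"ratio (expect ~1/N = {1/N:.3f}): {b_ave.var() / b_single.var():.3f}")
```

With the defaults above, the printed ratio comes out near 1/N = 0.2, consistent with the abstract's statement that, absent bias, the error is proportional to the boundary variance, so averaging N such classifiers reduces the added error by roughly a factor of N under these independence assumptions.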
Keywords: Combining, Decision boundary, Neural networks, Pattern classification, Hybrid networks, Variance reduction
Article history: Received 4 October 1994, Revised 16 May 1995, Accepted 2 June 1995, Available online 7 June 2001.
DOI: https://doi.org/10.1016/0031-3203(95)00085-2