When is the Naive Bayes approximation not so naive?
Authors: Christopher R. Stephens, Hugo Flores Huerta, Ana Ruíz Linares
Abstract
The Naive Bayes approximation (NBA) and its associated classifier are widely used and offer robust performance across a large spectrum of problem domains. Since the approximation rests on a very strong assumption, the independence of the features, this success has been somewhat puzzling. Various hypotheses have been put forward to explain it, and many generalizations have been proposed. In this paper we propose a set of “local” error measures, associated with the likelihood functions for subsets of attributes and for each class, and show explicitly how these local errors combine to give a “global” error associated with the full attribute set. In so doing we formulate a framework within which the phenomenon of error cancellation, or augmentation, can be quantified and its impact on classifier performance estimated and predicted a priori. These diagnostics allow us to develop a deeper and more quantitative understanding of why the NBA is so robust and of the circumstances under which one expects it to break down. We show how these diagnostics can be used to select which features to combine, use them in a simple generalization of the NBA, and apply the resulting classifier to a set of real-world data sets.
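For context, the NBA replaces the class-conditional joint likelihood with a product of single-attribute likelihoods. One natural way to formalize the “global” error the abstract refers to, and to see why per-class errors can cancel in classification, is the log-ratio sketched below; this is an illustrative formalization consistent with the abstract, not necessarily the paper's exact definitions:

\[
P(X_1,\dots,X_n \mid C) \;\approx\; \prod_{i=1}^{n} P(X_i \mid C),
\qquad
\epsilon_C(X) \;=\; \ln \frac{P(X_1,\dots,X_n \mid C)}{\prod_{i=1}^{n} P(X_i \mid C)} .
\]

Because a Bayes classifier only compares posteriors across classes, the decision depends on differences such as \(\epsilon_C(X) - \epsilon_{C'}(X)\): two classes can each carry a large approximation error and still be ranked correctly when those errors cancel, which is one intuition behind the robustness the abstract describes.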
Keywords: Classification, Naive Bayes approximation, Generalized Bayes approximation, Performance prediction, Error analysis
Paper link: https://doi.org/10.1007/s10994-017-5658-0