Explaining prediction models and individual predictions with feature contributions

Authors: Erik Štrumbelj, Igor Kononenko

Abstract

We present a sensitivity analysis-based method for explaining prediction models that can be applied to any type of classification or regression model. Its advantage over existing general methods is that all subsets of input features are perturbed, so interactions and redundancies between features are taken into account. Furthermore, when explaining an additive model, the method is equivalent to commonly used additive model-specific methods. We illustrate the method’s usefulness with examples from artificial and real-world data sets and an empirical analysis of running times. Results from a controlled experiment with 122 participants suggest that the method’s explanations improved the participants’ understanding of the model.
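Perturbing all subsets of input features corresponds to computing Shapley values of the features, which is typically approximated by sampling rather than enumerating all subsets. Below is a minimal Python sketch of such a sampling-based approximation; the function name, the `predict` callable interface, and the use of a background data sample as the perturbation baseline are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shapley_contributions(predict, x, X_background, n_samples=1000, seed=None):
    """Monte Carlo estimate of per-feature Shapley contributions for one instance.

    predict      : callable mapping a 2-D array to model outputs
    x            : 1-D array, the instance to explain
    X_background : 2-D array used as the source of 'perturbed' feature values
    """
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        order = rng.permutation(n_features)                        # random feature ordering
        z = X_background[rng.integers(len(X_background))].copy()   # random reference instance
        prev = predict(z[None, :])[0]                              # prediction with all features perturbed
        for j in order:
            z[j] = x[j]                                            # reveal feature j of the explained instance
            cur = predict(z[None, :])[0]
            phi[j] += cur - prev                                   # marginal contribution of j in this ordering
            prev = cur
    return phi / n_samples
```

With a fitted model exposing a `predict` method, one might call, e.g., `shapley_contributions(model.predict, X_test[0], X_train)`; up to sampling error, the contributions sum to the difference between the model's prediction for the instance and its mean prediction over the background sample.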

Keywords: Knowledge discovery, Data mining, Visualization, Interpretability, Decision support


Paper URL: https://doi.org/10.1007/s10115-013-0679-x