Detecting unusual input to neural networks

Authors: Jörg Martin, Clemens Elster

Abstract

Evaluating a neural network on an input that differs markedly from the training data can cause erratic and flawed predictions. We study a method that judges the unusualness of an input by evaluating its informative content relative to the learned parameters. This technique can be used to judge whether a network is suitable for processing a given input and to raise a red flag that unexpected behavior may lie ahead. We compare our approach to various methods for uncertainty evaluation from the literature across several datasets and scenarios. Specifically, we introduce a simple, effective method that allows the outputs of such metrics to be compared directly for single input points, even when these metrics live on different scales.

Keywords: Deep learning, Trustworthiness, Fisher information, Uncertainty, Out-of-distribution

Paper URL: https://doi.org/10.1007/s10489-020-01925-8