Embedding deep networks into visual explanations

Authors:

Abstract

In this paper, we propose a novel Explanation Neural Network (XNN) to explain the predictions made by a deep network. The XNN works by learning a nonlinear embedding of a high-dimensional activation vector of a deep network layer into a low-dimensional explanation space while retaining faithfulness, i.e., the original deep network's predictions can be reconstructed from the few concepts extracted by our explanation network. We then visualize these concepts so that humans can learn about the high-level concepts the deep network uses to make decisions. We propose an algorithm called Sparse Reconstruction Autoencoder (SRAE) for learning the embedding into the explanation space. SRAE aims to reconstruct part of the original feature space while retaining faithfulness. A pull-away term is applied to SRAE to make the bases of the explanation space more orthogonal to each other. A visualization system is then introduced to help humans understand the features in the explanation space. The proposed method is applied to explain CNN models in image classification tasks. We conducted a human study, which shows that the proposed approach outperforms single-saliency-map baselines and improves human performance on a difficult classification task. In addition, several novel metrics are introduced to evaluate the quality of explanations quantitatively without human involvement.
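The abstract describes an embedding with two objectives: faithfulness (the network's prediction should be recoverable from the few explanation concepts) and near-orthogonality of the explanation bases via a pull-away term. The snippet below is a minimal sketch, assuming PyTorch, of how such a module could be wired up; the names (`ExplanationNetwork`, `pull_away`), layer sizes, the single-score decoder, and the loss weight `lambda_pt` are illustrative assumptions, not the authors' SRAE implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExplanationNetwork(nn.Module):
    """Embeds high-dimensional layer activations into a k-dimensional
    explanation space and reconstructs the network's prediction from it."""
    def __init__(self, d, k, hidden=64):
        super().__init__()
        # nonlinear embedding from the d-dim activation vector to k concepts
        self.embed = nn.Sequential(nn.Linear(d, hidden), nn.Tanh(), nn.Linear(hidden, k))
        # faithfulness head: maps the k concepts back to a prediction score
        self.decode = nn.Linear(k, 1)

    def forward(self, h):
        x = self.embed(h)       # (batch, k) explanation-space coordinates
        y_hat = self.decode(x)  # (batch, 1) reconstructed prediction
        return x, y_hat

def pull_away(x):
    """One common formulation of a pull-away penalty: discourage pairwise
    cosine similarity between concept activations so the explanation bases
    stay close to orthogonal (assumes k > 1)."""
    xn = F.normalize(x, dim=0)            # normalize each concept over the batch
    gram = xn.t() @ xn                    # (k, k) cosine similarities
    k = gram.shape[0]
    off_diag = gram - torch.eye(k, device=gram.device)
    return (off_diag ** 2).sum() / (k * (k - 1))

# Illustrative training step: h are layer activations, y the deep net's score.
# xnn = ExplanationNetwork(d=512, k=5)
# x, y_hat = xnn(h)
# loss = F.mse_loss(y_hat, y) + lambda_pt * pull_away(x)
```

In this sketch the faithfulness term is a simple regression of the original prediction from the concepts; the paper's SRAE additionally reconstructs part of the original feature space, which is omitted here for brevity.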

Keywords: Deep neural networks, Embedding, Visual explanations

Article history: Received 1 April 2020, Revised 3 November 2020, Accepted 28 November 2020, Available online 2 December 2020, Version of Record 17 December 2020.

DOI: https://doi.org/10.1016/j.artint.2020.103435