Multi-granularity visual explanations for CNN
Abstract:
The interpretability of convolutional neural networks (CNNs) is attracting increasing attention. Class activation maps (CAM) intuitively explain the classification mechanisms of CNNs by highlighting important areas. However, as coarse-grained explanations, classical CAM methods cannot explain the classification mechanism in detail. Inspired by granular computing theory, we propose a new CAM method that divides the highlighted areas into commonality saliency maps and specificity saliency maps for multi-granularity visualization. The method consists of three components. First, the universe is simplified to contain only a category pair. Then, neighborhood rough sets divide the universe into three disjoint regions containing the commonality and specificity of the category pair, using adaptive thresholds at the optimal granularity. Finally, these three regions are used to generate multi-granularity saliency maps. The method effectively visualizes the multi-granularity classification mechanism of the CNN and further explains misclassifications. We compare it with five representative CAM methods using two newly proposed fine-grained evaluation metrics and subjective observations. First, experiments demonstrate that the multi-granularity visualization method provides a more extensive and detailed explanation. Second, the adaptive thresholds adapt to different situations to yield reliable visual explanations. Finally, when explaining adversarial attacks, it visualizes the details that caused the misclassification.
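To make the three-way decomposition idea concrete, the following NumPy sketch splits a pair of class activation maps into a commonality region and two specificity regions. It is illustrative only: the function name `split_cam_pair`, the fixed thresholds `alpha` and `beta`, and the simple thresholding rule are assumptions standing in for the paper's neighborhood-rough-set division with adaptive, optimal-granularity thresholds.

```python
import numpy as np

def split_cam_pair(cam_a: np.ndarray, cam_b: np.ndarray,
                   alpha: float = 0.6, beta: float = 0.3):
    """Illustrative three-way split of two class activation maps.

    cam_a, cam_b : saliency maps of the same shape for the two classes
                   in a category pair, assumed normalized to [0, 1].
    alpha, beta  : upper/lower thresholds standing in for the adaptive
                   thresholds derived in the paper.

    Returns boolean masks for the commonality region (salient for both
    classes) and the specificity regions of each class.
    """
    high_a, high_b = cam_a >= alpha, cam_b >= alpha
    low_a,  low_b  = cam_a < beta,  cam_b < beta

    commonality   = high_a & high_b   # important to both classes
    specificity_a = high_a & low_b    # important only to class A
    specificity_b = high_b & low_a    # important only to class B
    return commonality, specificity_a, specificity_b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cam_a = rng.random((7, 7))        # e.g. a 7x7 CAM from the last conv layer
    cam_b = rng.random((7, 7))
    common, spec_a, spec_b = split_cam_pair(cam_a, cam_b)
    print(common.sum(), spec_a.sum(), spec_b.sum())
```

The resulting masks can be overlaid on the input image to produce the commonality and specificity saliency maps described in the abstract.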
Keywords: Interpretability, Visual explanations, Class activation mapping (CAM), Neighborhood rough set, Multi-granularity
Article history: Received 25 February 2022, Revised 13 July 2022, Accepted 14 July 2022, Available online 25 July 2022, Version of Record 8 August 2022.
DOI: https://doi.org/10.1016/j.knosys.2022.109474