Believe the HiPe: Hierarchical perturbation for fast, robust, and model-agnostic saliency mapping
Authors:
Highlights:
• Explainable AI (XAI) is increasingly necessary for AI safety as we build complex models in high-stakes domains and deploy them widely.
• Saliency mapping is a popular explanation/attribution XAI technique for deep learning.
• Existing model-agnostic saliency mapping approaches are prohibitively slow.
• Hierarchical Perturbation (HiPe) is a new model-agnostic method which generates heatmaps of comparable or superior quality to the state-of-the-art.
• HiPe is 20× faster than existing model-agnostic saliency methods.
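The highlights name perturbation-based saliency mapping but do not spell out the mechanism. As a hedged illustration only (this is the generic occlusion-style baseline, not the paper's HiPe algorithm; HiPe's contribution is to refine such maps hierarchically, coarse-to-fine, rather than exhaustively), a minimal NumPy sketch with a hypothetical toy model:

```python
import numpy as np

def perturbation_saliency(model, image, patch=4, fill=0.0):
    """Occlusion-style saliency: mask each patch in turn and record the
    drop in the model's score. A larger drop marks a more salient region.
    (Generic baseline for illustration; not the HiPe algorithm itself.)"""
    h, w = image.shape
    base = model(image)
    sal = np.zeros_like(image, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            perturbed = image.copy()
            perturbed[y:y + patch, x:x + patch] = fill
            sal[y:y + patch, x:x + patch] = base - model(perturbed)
    return sal

# Hypothetical toy "model": its score is the mean intensity of the
# centre 8x8 block, so only the centre of the image should be salient.
def toy_model(img):
    return img[4:12, 4:12].mean()

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0  # bright centre square drives the score
sal = perturbation_saliency(toy_model, img)
print(sal[8, 8] > sal[0, 0])  # centre patch is more salient than the corner
```

This exhaustive scan costs one forward pass per patch at full resolution, which is why model-agnostic methods are slow; a hierarchical scheme like HiPe's instead perturbs large regions first and spends fine-grained passes only where the coarse map indicates saliency.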
Keywords: XAI, AI safety, Saliency mapping, Deep learning explanation, Interpretability, Prediction attribution
Article history: Received 23 February 2021; Revised 11 April 2022; Accepted 24 April 2022; Available online 26 April 2022; Version of Record 2 May 2022.
DOI: https://doi.org/10.1016/j.patcog.2022.108743