Levels of explainable artificial intelligence for human-aligned conversational explanations

Authors:

Highlights:

• Provide insights into AI-Human communication.

• Define levels of explanation with identified techniques that align with AI cognitive processes.

• Discuss insights into Broad eXplainable Artificial Intelligence (Broad-XAI).

• Align AI explanation to human communication.

Abstract

Over the last few years, there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investment by industry and governments, along with heightened concern from the general public. People are affected by autonomous decisions every day, and the public need to understand the decision-making process to accept the outcomes. However, the vast majority of the applications of XAI/IML are focused on providing low-level ‘narrow’ explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insights into an agent's: beliefs and motivations; hypotheses of other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or, processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making. This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, this paper will survey current approaches and discuss the integration of different technologies to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), and thereby move towards high-level ‘strong’ explanations.

Keywords: Explainable Artificial Intelligence (XAI), Broad-XAI, Interpretable Machine Learning (IML), Artificial General Intelligence (AGI), Human-Computer Interaction (HCI)

Article history: Received 28 February 2020, Revised 18 November 2020, Accepted 30 April 2021, Available online 12 May 2021, Version of Record 13 May 2021.

DOI: https://doi.org/10.1016/j.artint.2021.103525