The quest of parsimonious XAI: A human-agent architecture for explanation formulation
Authors:
Abstract
With the widespread use of Artificial Intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guarantee successful human-agent collaboration, since it is not straightforward for humans to understand an agent's state of mind. Recent empirical studies have confirmed that explaining a system's behavior to human users fosters the users' acceptance of the system. However, providing overwhelming or unnecessary information may also confuse users and cause failure. For these reasons, parsimony has been identified as one of the key features enabling successful human-agent interaction, with a parsimonious explanation defined as the simplest (i.e., least complex) explanation that describes the situation adequately (i.e., with descriptive adequacy). While parsimony is receiving growing attention in the literature, most existing work remains conceptual. This paper proposes a mechanism for parsimonious eXplainable AI (XAI). In particular, it introduces the process of explanation formulation and proposes HAExA, a human-agent explainability architecture that makes this process operational for remote robots. To provide parsimonious explanations, HAExA relies on both contrastive explanations and explanation filtering. To evaluate the proposed architecture, several research hypotheses are investigated in an empirical user study that relies on well-established XAI metrics to estimate how trustworthy and satisfactory the explanations provided by HAExA are. The results are analyzed using parametric and non-parametric statistical testing.
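To make the parsimony criterion above concrete, the sketch below selects, among candidate explanations, the least complex one that still meets an adequacy threshold. This is a minimal illustration of the definition given in the abstract, not HAExA's published interface: the Explanation class, the complexity and adequacy scores, and the threshold value are all assumptions introduced here for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch only: the abstract does not specify HAExA's data
# structures; the complexity and adequacy scores below are assumed proxies.

@dataclass
class Explanation:
    text: str
    complexity: float  # assumed proxy, e.g. number of cited causes
    adequacy: float    # assumed score in [0, 1] for descriptive adequacy

def parsimonious(candidates, adequacy_threshold=0.8):
    """Return the simplest explanation that is still descriptively adequate."""
    adequate = [e for e in candidates if e.adequacy >= adequacy_threshold]
    if not adequate:
        return None  # no candidate describes the situation well enough
    return min(adequate, key=lambda e: e.complexity)

if __name__ == "__main__":
    candidates = [
        Explanation("I stopped because the path is blocked.",
                    complexity=1, adequacy=0.85),
        Explanation("I stopped because the path is blocked, my battery is "
                    "low, and my plan was revised twice.",
                    complexity=3, adequacy=0.95),
    ]
    best = parsimonious(candidates)
    print(best.text)  # -> the shorter, still-adequate explanation
```

Under this reading, explanation filtering trades a small loss in descriptive detail for a large reduction in complexity, which is what the abstract argues prevents users from being overwhelmed.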
Keywords: Explainable artificial intelligence, Human-computer interaction, Multi-agent systems, Empirical user studies, Statistical testing
Article history: Received 2 May 2020, Revised 29 July 2021, Accepted 2 August 2021, Available online 8 August 2021, Version of Record 1 September 2021.
DOI: https://doi.org/10.1016/j.artint.2021.103573