Spatial relation learning for explainable image classification and annotation in critical applications

Authors:

Abstract

With the recent successes of black-box models in Artificial Intelligence (AI) and the growing interaction between humans and AI systems, explainability has become a pressing concern. In this article, in the context of high-stakes applications, we propose an approach for explainable classification and annotation of images. It is based on a transparent model, whose reasoning is accessible and human-understandable, and on interpretable fuzzy relations that make it possible to express the vagueness of natural language. The knowledge about relations is defined beforehand by an expert, so training instances do not need to be annotated. The most relevant relations are extracted using a fuzzy frequent itemset mining algorithm in order to build rules for classification and constraints for annotation. We also present two heuristics that speed up the evaluation of relations. Since the strengths of our approach are the transparency of the model and the interpretability of the relations, an explanation in natural language can be generated. Supported by experimental results, we show that, given a segmentation of the input, our approach successfully performs the target tasks and generates explanations that were judged consistent and convincing by a set of participants.
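To make the two core ingredients of the abstract concrete, the sketch below illustrates (a) a fuzzy spatial relation whose degree of truth captures the vagueness of natural language, and (b) a fuzzy support measure of the kind used by fuzzy frequent itemset mining to select the most relevant relations. This is a minimal, hypothetical illustration, not the authors' implementation: the membership function, the min t-norm combined with averaging for support, and all names and values are assumptions chosen for readability.

```python
"""Illustrative sketch only: a fuzzy spatial relation and a fuzzy
itemset support measure. Names, thresholds, and formulas are
hypothetical, not taken from the paper."""

import math


def degree_left_of(center_a, center_b, spread=45.0):
    """Fuzzy degree in [0, 1] to which region A lies to the left of
    region B, based on the angle between their centroids.
    A triangular membership gives full degree when B is due right of A
    (angle 0 deg) and fades to 0 at +/- `spread` degrees."""
    dx = center_b[0] - center_a[0]
    dy = center_b[1] - center_a[1]
    angle = math.degrees(math.atan2(dy, dx))
    return max(0.0, 1.0 - abs(angle) / spread)


def fuzzy_support(itemset, transactions):
    """Fuzzy support of an itemset: for each transaction (one image),
    combine the membership degrees of its items with the min t-norm,
    then average over all transactions (one common formulation in
    fuzzy frequent itemset mining)."""
    total = 0.0
    for degrees in transactions:
        total += min(degrees.get(item, 0.0) for item in itemset)
    return total / len(transactions)


if __name__ == "__main__":
    # One "transaction" per training image: the fuzzy degrees of the
    # relations holding between its segmented regions (made-up values).
    transactions = [
        {"wing left_of fuselage": 0.9, "wheel below fuselage": 0.8},
        {"wing left_of fuselage": 0.7, "wheel below fuselage": 0.6},
        {"wing left_of fuselage": 0.2, "wheel below fuselage": 0.9},
    ]
    itemset = ["wing left_of fuselage", "wheel below fuselage"]
    print(degree_left_of((10, 20), (40, 22)))      # ~0.92: strongly "left of"
    print(fuzzy_support(itemset, transactions))    # ~0.53: min-t-norm support
```

Under this kind of formulation, itemsets whose fuzzy support exceeds a threshold would be kept as candidate antecedents for classification rules or annotation constraints; since the relation degrees remain interpretable, each fired rule can be verbalized as a natural-language explanation.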

Keywords: Explainable artificial intelligence, Relation learning, Fuzzy logic

Article history: Received 30 April 2020, Revised 28 October 2020, Accepted 28 November 2020, Available online 2 December 2020, Version of Record 11 December 2020.

DOI: https://doi.org/10.1016/j.artint.2020.103434