Demystifying XAI: Requirements for Understandable XAI Explanations.

Journal: Studies in health technology and informatics
Abstract

This paper establishes requirements for assessing the usability of Explainable Artificial Intelligence (XAI) methods, focusing on non-AI experts such as healthcare professionals. Through a synthesis of literature and empirical findings, it emphasizes optimizing cognitive load, task performance, and task completion time in XAI explanations. Key components include tailoring explanations to user expertise, integrating domain knowledge, and using non-propositional representations to aid comprehension. The paper highlights the critical role of relevance, accuracy, and truthfulness in fostering user trust. Practical guidelines are provided for designing transparent and user-friendly XAI explanations, especially in high-stakes contexts such as healthcare. Overall, the paper's primary contribution lies in delineating clear requirements for effective XAI explanations, facilitating human-AI collaboration across diverse domains.

Authors

  • Jan Stodt
    Institute for Data Science, Cloud Computing, and IT Security, Furtwangen University, Furtwangen, Germany.
  • Christoph Reich
    Department of Internal Medicine III, University of Heidelberg, Im Neuenheimer Feld 410, 69120, Heidelberg, Germany.
  • Martin Knahl
    Institute for Data Science, Cloud Computing, and IT Security, Furtwangen University, Furtwangen, Germany.