Explainable Versus Interpretable AI in Healthcare: How to Achieve Understanding.

Journal: Studies in Health Technology and Informatics

Abstract

The increasing adoption of deep learning methods has intensified the demand for explanations regarding how AI systems generate their results. This necessity originated primarily in the domain of image processing and has expanded to encompass the complexities of large language models (LLMs), particularly in medical contexts. For example, when LLM-based chatbots provide medical advice, the challenge lies in articulating the rationale behind their recommendations, especially when specific features may not be identifiable. This paper explores the distinction between explanation, interpretation, and understanding within AI-driven decision support systems. By adopting Daniel Dennett's intentional stance, we propose a methodology for analyzing how AI explanations can facilitate deeper user engagement and comprehension. Furthermore, we examine the implications of this methodology for the development and regulation of medical chatbots.

Authors

  • Denis Moser
    Bern University of Applied Sciences, Switzerland.
  • Murat Sariyar
    Bern University of Applied Sciences, Department of Medical Informatics, Switzerland.