Explainable AI in medical imaging: An overview for clinical practitioners - Saliency-based XAI approaches.

Journal: European Journal of Radiology

Abstract

Artificial Intelligence (AI) has achieved significant success and shown promising results across many fields of application during the last decade, and it has also become an essential part of medical research. Improving data availability, coupled with advances in high-performance computing and innovative algorithms, has increased AI's potential in many areas. As AI rapidly reshapes research and promotes the development of personalized clinical care, its implementation creates an urgent need for a deep understanding of its inner workings, especially in high-stakes domains. However, such systems can be highly complex and opaque, limiting the possibility of an immediate understanding of their decisions. In the medical field, these decisions carry a high impact: physicians and patients can only fully trust AI systems that communicate the origin of their results in a comprehensible way, which simultaneously enables the identification of errors and biases. Explainable AI (XAI), which has become an increasingly important field of research in recent years, promotes the development of explainability methods and provides a rationale that allows users to comprehend the results generated by AI systems. In this paper, we investigate the application of XAI in medical imaging, addressing a broad audience, especially healthcare professionals. The content focuses on definitions and taxonomies, standard methods and approaches, advantages, limitations, and examples representing the current state of research on XAI in medical imaging. This paper focuses on saliency-based XAI methods, in which the explanation is provided directly on the input data (image) and which are naturally of special importance in medical imaging.
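
To illustrate the general idea behind saliency-based explanations, the following minimal sketch computes a plain gradient saliency map in PyTorch. The ResNet-18 stand-in classifier and the random input tensor are illustrative assumptions for self-containment, not the models or data discussed in this paper.

    # Minimal sketch of a gradient-based saliency map (illustrative only).
    import torch
    import torchvision.models as models

    # Hypothetical stand-in classifier; any differentiable image model works.
    model = models.resnet18(weights=None)
    model.eval()

    # Dummy input image; gradients must be tracked back to the pixels.
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    # Forward pass, then backpropagate the score of the predicted class.
    logits = model(image)
    target = logits.argmax(dim=1).item()
    logits[0, target].backward()

    # The saliency map is the per-pixel gradient magnitude: pixels whose
    # perturbation would change the class score the most light up.
    saliency = image.grad.abs().max(dim=1).values.squeeze()  # shape (224, 224)

In practice, such a map is overlaid on the input image as a heatmap, so the regions driving the model's prediction become directly visible to the clinical reader.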

Authors

  • Katarzyna Borys
    Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147 Essen, Germany. Electronic address: Katarzyna.Borys@uk-essen.de.
  • Yasmin Alyssa Schmitt
    Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany.
  • Meike Nauta
    Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Data Management & Biometrics Group, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands.
  • Christin Seifert
    Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany.
  • Nicole Krämer
    Department of Social Psychology, Media and Communication, University of Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany; Research Center "Trustworthy Data Science and Security", Otto-Hahn-Straße 14, 44227 Dortmund, Germany.
  • Christoph M Friedrich
    Department of Computer Science, University of Applied Sciences and Arts Dortmund, Dortmund, Germany.
  • Felix Nensa
Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany.