Exploring Explainable AI Techniques for Text Classification in Healthcare: A Scoping Review.

Journal: Studies in health technology and informatics
Published Date:

Abstract

Text classification plays an essential role in the medical domain by organizing and categorizing vast amounts of textual data through machine learning (ML) and deep learning (DL). The adoption of Artificial Intelligence (AI) technologies in healthcare has raised concerns about the interpretability of AI models, which are often perceived as "black boxes." Explainable AI (XAI) techniques aim to mitigate this issue by elucidating the decision-making processes of AI models. In this paper, we present a scoping review exploring the application of different XAI techniques to medical text classification, identifying two main types: model-specific and model-agnostic methods. Despite some positive feedback from developers, formal evaluations of these techniques with medical end users remain limited. The review highlights the necessity for further research in XAI to enhance trust and transparency in AI-driven decision-making processes in healthcare.

Authors

  • Ibrahim Alaa Eddine Madi
    Sorbonne Université, Université Sorbonne Paris Nord, INSERM, Laboratoire d'Informatique Médicale et d'Ingénierie des connaissances en e-Santé, LIMICS, Paris, France.
  • Akram Redjdal
    Sorbonne Université, Université Sorbonne Paris Nord, Inserm, UMR S_1142, LIMICS, Paris, France.
  • Jacques Bouaud
    AP-HP, DRCD, Paris, France.
  • Brigitte Seroussi
    Sorbonne Universités, UPMC Université Paris 06, UMR_S 1142, LIMICS, Paris, France.