Should AI models be explainable to clinicians?

Journal: Critical care (London, England)
PMID:

Abstract

In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet, although XAI is a growing field, defining explainability and standardising its assessment remain ongoing challenges, and trade-offs between performance and explainability may be necessary.
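To make the notion of "actionable insights" concrete, the sketch below illustrates one common model-agnostic XAI technique, permutation feature importance, applied to a toy clinical risk model. It is not taken from the article: the features, data, and model are invented for illustration, assuming scikit-learn and NumPy are available.

```python
# Hypothetical sketch only: permutation feature importance on a synthetic ICU-style dataset.
# Feature names, data generation, and the outcome definition are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Invented features: heart rate (bpm), mean arterial pressure (mmHg), lactate (mmol/L), age (years).
X = np.column_stack([
    rng.normal(90, 15, n),
    rng.normal(75, 10, n),
    rng.gamma(2.0, 1.0, n),
    rng.normal(65, 12, n),
])
# Synthetic outcome loosely driven by lactate and blood pressure.
logit = 0.8 * X[:, 2] - 0.05 * X[:, 1] + rng.normal(0, 1, n)
y = (logit > np.median(logit)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: how much does shuffling each feature degrade test performance?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in zip(["heart_rate", "map", "lactate", "age"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

In this kind of output, a clinician can see which inputs the model relies on most (here, lactate and mean arterial pressure by construction), which is one simple way an otherwise opaque model can be made more interpretable at the bedside.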

Authors

  • Gwénolé Abgrall
    AP-HP, Service de Médecine Intensive-Réanimation, Hôpital de Bicêtre, DMU 4 CORREVE, Inserm UMR S_999, FHU SEPSIS, CARMAS, Université Paris-Saclay, 78 Rue du Général Leclerc, 94270, Le Kremlin-Bicêtre, France. gwenoleabgrall@gmail.com.
  • Andre L Holder
    Cardiopulmonary Research Laboratory, Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA.
  • Zaineb Chelly Dagdia
    Laboratoire DAVID, Université Versailles Saint-Quentin-en-Yvelines, 78035, Versailles, France.
  • Karine Zeitouni
Laboratoire DAVID, Université Versailles Saint-Quentin-en-Yvelines, 78035, Versailles, France.
  • Xavier Monnet
    AP-HP, Service de Médecine Intensive-Réanimation, Hôpital de Bicêtre, DMU 4 CORREVE, Inserm UMR S_999, FHU SEPSIS, CARMAS, Université Paris-Saclay, 78 Rue du Général Leclerc, 94270, Le Kremlin-Bicêtre, France.