Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes.

Journal: Artificial intelligence in medicine

Abstract

Medical experts may use Artificial Intelligence (AI) systems with greater trust if these are supported by 'contextual explanations' that let the practitioner connect system inferences to their context of use. However, the importance of such explanations in improving model usage and understanding has not been extensively studied. Hence, we consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state, AI predictions about their risk of complications, and algorithmic explanations supporting the predictions. We explore how relevant information for such dimensions can be extracted from medical guidelines to answer typical questions from clinical practitioners. We identify this as a question answering (QA) task and employ several state-of-the-art Large Language Models (LLMs) to present contexts around risk prediction model inferences and evaluate their acceptability. Finally, we study the benefits of contextual explanations by building an end-to-end AI pipeline that includes data cohorting, AI risk modeling, and post-hoc model explanations, and by prototyping a visual dashboard that presents the combined insights from different context dimensions and data sources while predicting and identifying the drivers of risk of Chronic Kidney Disease (CKD), a common type-2 diabetes (T2DM) comorbidity. All of these steps were performed in deep engagement with medical experts, including a final evaluation of the dashboard results by an expert medical panel. We show that LLMs, in particular BERT and SciBERT, can be readily deployed to extract some relevant explanations to support clinical usage. To understand the value added by the contextual explanations, the expert panel evaluated them for actionable insights in the relevant clinical setting. Overall, our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case. Our findings can help improve clinicians' usage of AI models.

Authors

  • Shruthi Chari
    Rensselaer Polytechnic Institute, Troy, NY.
  • Prasant Acharya
    Rensselaer Polytechnic Institute, 110 8th St, Troy, NY 12180, USA.
  • Daniel M Gruen
    IBM Research, Cambridge, MA.
  • Olivia Zhang
    Center for Computational Health, IBM Research, 1101 Kitchawan Rd, Yorktown Heights, NY 10598, USA.
  • Elif K Eyigoz
    Center for Computational Health, IBM Research, 1101 Kitchawan Rd, Yorktown Heights, NY 10598, USA.
  • Mohamed Ghalwash
    Center for Computational Health, IBM Research, Yorktown Heights, NY, USA.
  • Oshani Seneviratne
    Rensselaer Polytechnic Institute, Troy, NY.
  • Fernando Suarez Saiz
    IBM Watson Health, IBM Corporation, Cambridge, MA.
  • Pablo Meyer
    Department of Biology, Center for Genomics & Systems Biology, New York University, New York, NY 10003, USA; Perinatology Research Branch, Eunice Kennedy Shriver National Institute of Child Health and Human Development, NIH, Bethesda, MD and Detroit, MI 48201, USA; IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598, USA; Computer Science Department, Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA; and Department of Computer Science, Wayne State University, Detroit, MI 48202, USA.
  • Prithwish Chakraborty
    Center for Computational Health, IBM Research, Yorktown Heights, NY, USA.
  • Deborah L McGuinness
    Rensselaer Polytechnic Institute, Troy, NY.