Explainable artificial intelligence model for mortality risk prediction in the intensive care unit: a derivation and validation study.

Journal: Postgraduate Medical Journal

Abstract

BACKGROUND: A lack of transparency is a prevalent issue among current machine-learning (ML) algorithms used to predict mortality risk. Herein, we aimed to improve transparency by applying a state-of-the-art ML explainability technique, SHapley Additive exPlanations (SHAP), to develop a predictive model for critically ill patients.
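
To illustrate the explainability technique the abstract names, below is a minimal Python sketch of how SHAP is typically applied to a tabular risk model. This is not the authors' pipeline: the gradient-boosting model, the synthetic data, and the feature names (age, lactate, urine_output, gcs_score) are all illustrative assumptions.

    # Minimal SHAP sketch: attribute a classifier's mortality-risk predictions
    # to individual input features. Model choice and features are assumptions,
    # not the study's actual setup.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-in for ICU tabular data: rows = patients, cols = features.
    feature_names = ["age", "lactate", "urine_output", "gcs_score"]  # hypothetical
    X = rng.normal(size=(500, len(feature_names)))
    y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    # Fit a tree-based risk model; SHAP's TreeExplainer handles these efficiently.
    model = GradientBoostingClassifier().fit(X, y)

    # SHAP values decompose each prediction into per-feature contributions,
    # turning a black-box risk score into a per-patient explanation.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Global view: mean absolute SHAP value per feature acts as an importance score.
    importance = np.abs(shap_values).mean(axis=0)
    for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
        print(f"{name}: {imp:.3f}")

The key property SHAP provides, and the likely reason it was chosen here, is that it yields both local explanations (why this patient received this risk score) and, by aggregating absolute values, a global feature-importance ranking from the same computation.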

Authors

  • Chang Hu
    Department of Critical Care Medicine, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, China.
  • Chao Gao
    College of Marine and Environmental Sciences, Tianjin University of Science and Technology, Tianjin 300457, China.
  • Tianlong Li
    State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China.
  • Chang Liu
    Key Lab of Cell Differentiation and Apoptosis of Ministry of Education, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
  • Zhiyong Peng
    Department of Critical Care Medicine, Zhongnan Hospital of Wuhan University, Wuhan 430071, Hubei, China.