A Clinically Practical and Interpretable Deep Model for ICU Mortality Prediction with External Validation.

Journal: AMIA ... Annual Symposium proceedings. AMIA Symposium
PMID:

Abstract

Deep learning models are increasingly studied in the field of critical care. However, their limited interpretability and lack of external validation make them difficult to generalize to critical care scenarios. Few works have validated the performance of deep learning models on external datasets. To address this, we propose a clinically practical and interpretable deep model for intensive care unit (ICU) mortality prediction with external validation. We use the newly published Philips eICU dataset to train a recurrent neural network model with a two-level attention mechanism, and use the MIMIC-III dataset as the external validation set to verify model performance. This model achieves high discrimination (AUC = 0.855 on the external validation set) and has good interpretability. Based on this model, we develop a system to support clinical decision-making in ICUs.
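The abstract does not specify how the two-level attention mechanism is computed. A common design in interpretable clinical models (e.g. RETAIN-style architectures) combines step-level attention over time with feature-level attention over the RNN hidden state. The sketch below is an assumption, not the authors' published architecture: `two_level_attention_predict`, its weight matrices, and the toy dimensions are all hypothetical, and a NumPy forward pass stands in for a trained RNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def two_level_attention_predict(H, w_alpha, W_beta, w_out):
    """Hypothetical two-level attention readout for one ICU stay.

    H       : (T, d) RNN hidden states over T time steps
    w_alpha : (d,)   projects each step to a step-level attention score
    W_beta  : (d, d) projects each step to feature-level weights
    w_out   : (d,)   logistic output weights
    Returns (mortality probability, step-level attention weights).
    """
    alpha = softmax(H @ w_alpha)                # (T,)  which time steps matter
    beta = np.tanh(H @ W_beta)                  # (T,d) which features matter per step
    context = (alpha[:, None] * beta * H).sum(axis=0)  # (d,) attended summary
    p = 1.0 / (1.0 + np.exp(-(context @ w_out)))       # sigmoid risk score
    return p, alpha

# Toy example: 6 time steps, hidden size 8, random (untrained) weights
T, d = 6, 8
H = rng.normal(size=(T, d))
p, alpha = two_level_attention_predict(
    H, rng.normal(size=d), rng.normal(size=(d, d)), rng.normal(size=d)
)
```

The step-level weights `alpha` sum to one, so they can be read directly as the relative contribution of each time window to the prediction, which is the kind of interpretability the abstract refers to.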

Authors

  • Yanni Kang
    PingAn Health Technology, Beijing, China.
  • Xiaoyu Jia
    PingAn Health Technology, Beijing, China.
  • Kaifei Wang
    The General Hospital of the People's Liberation Army of China, Beijing, China.
  • Yiying Hu
    PingAn Health Technology, Beijing, China.
  • Jianying Guo
    PingAn Health Technology, Beijing, China.
  • Lin Cong
    PingAn Health Technology, Beijing, China.
  • Xiang Li
    Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States.
  • Guotong Xie
PingAn Health Technology, Beijing, China.