Interpretable Deep Models for ICU Outcome Prediction.

Journal: AMIA Annual Symposium Proceedings
Published Date:

Abstract

The exponential surge in health care data, such as longitudinal data from electronic health records (EHR) and sensor data from the intensive care unit (ICU), is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-the-art performance. However, deep models lack the interpretability that is crucial for wide adoption in medical research and clinical decision-making. In this paper, we introduce a simple yet powerful knowledge-distillation approach called interpretable mimic learning, which uses gradient boosting trees to learn interpretable models while achieving prediction performance as strong as that of deep learning models. Experimental results on a pediatric ICU dataset for acute lung injury (ALI) show that our proposed method not only outperforms state-of-the-art approaches on mortality and ventilator-free-days prediction tasks but can also provide interpretable models to clinicians.
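
The mimic-learning idea described above admits a compact sketch: train a deep "teacher" model, then fit gradient boosting trees to the teacher's soft predictions rather than to the hard labels, so the trees approximate the deep model's decision function while remaining inspectable. The following is a minimal, hypothetical illustration using scikit-learn with synthetic stand-in data; the small MLP teacher, feature dimensions, and threshold are placeholders, not the authors' pipeline.

# Minimal sketch of interpretable mimic learning (knowledge distillation):
# 1) train a deep teacher model, 2) take its soft predicted probabilities,
# 3) fit gradient boosting trees to mimic those soft targets.
# All names and data here are illustrative, not the paper's implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # stand-in for ICU features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in for a mortality label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Teacher: a deep model (a small MLP serves as a placeholder here).
teacher = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                        random_state=0).fit(X_tr, y_tr)
soft_targets = teacher.predict_proba(X_tr)[:, 1]   # soft labels

# Student: gradient boosting trees regress on the teacher's soft labels.
student = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                    random_state=0).fit(X_tr, soft_targets)

# The student can be inspected via feature importances or individual trees
# while approximating the teacher's predictions.
mimic_pred = (student.predict(X_te) > 0.5).astype(int)
print("teacher accuracy:", teacher.score(X_te, y_te))
print("student accuracy:", (mimic_pred == y_te).mean())
print("top mimic features:", np.argsort(student.feature_importances_)[::-1][:5])

Distilling from soft probabilities rather than hard labels is the key design choice: the soft targets carry the teacher's learned decision boundary, which is what makes the tree ensemble a faithful, interpretable surrogate.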

Authors

  • Zhengping Che
University of Southern California, Los Angeles, CA, USA.
  • Sanjay Purushotham
    University of Southern California, Los Angeles, CA, USA.
  • Robinder Khemani
    Children's Hospital Los Angeles, Los Angeles, CA, USA.
  • Yan Liu
University of Southern California, Los Angeles, CA, USA.