Personalized health monitoring using explainable AI: bridging trust in predictive healthcare.

Journal: Scientific Reports
Published Date:

Abstract

AI has propelled progress toward personalized health monitoring and early disease prediction. However, a significant limitation of many deep learning models is that they are not interpretable, which restricts their clinical utility and undermines clinicians' trust. Most existing methods offer only generic or post-hoc explanations, and few support accurate, patient-specific, individualized explanations. Furthermore, existing approaches are often restricted to static, limited-domain datasets and do not generalize across diverse healthcare scenarios. To address these problems, we propose PersonalCareNet, a new deep learning approach for personalized health monitoring built on the MIMIC-III clinical dataset. Our system combines convolutional neural networks with attention (CHARMS) and employs SHAP (SHapley Additive exPlanations) to obtain both global and patient-specific interpretability. By leveraging a large set of clinical features, the model offers clinically interpretable insights into feature contributions while supporting real-time risk prediction, thereby increasing transparency and fostering clinical trust. An extensive evaluation shows that PersonalCareNet achieves 97.86% accuracy, exceeding multiple notable state-of-the-art healthcare risk prediction models. The framework offers explainability at the local level (force plots, SHAP summary visualizations, and confusion-matrix-based diagnostics) and at the global level (feature importance plots and Top-N visualizations). Our quantitative results demonstrate that this accuracy is achieved without paying a high price for interpretability.
We propose a cost-effective, systematic AI-based platform that is scalable, accurate, transparent, and interpretable for critical care and personalized diagnostics. By bridging the gap between performance and interpretability, PersonalCareNet marks a significant advancement toward reliable, clinically validated predictive healthcare AI. The design also allows for extension to multiple data types and real-time deployment at the edge, broadening its impact and adaptability.
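The patient-level attributions the abstract describes can be illustrated with a minimal exact Shapley-value computation — the quantity that SHAP approximates for larger models. This is a sketch only: the linear "risk model", its weights, and the feature names below are hypothetical stand-ins, not values from the paper or from MIMIC-III.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values over a small feature set.

    features: dict of feature name -> observed value for one patient.
    value_fn: model evaluated on any subset of features (a dict).
    Returns a dict of per-feature contributions to value_fn(features).
    """
    names = list(features)
    n = len(names)
    phi = {}
    for f in names:
        others = [x for x in names if x != f]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {x: features[x] for x in S + (f,)}
                without = {x: features[x] for x in S}
                total += w * (value_fn(with_f) - value_fn(without))
        phi[f] = total
    return phi

# Toy linear risk score with hypothetical weights; absent features count as 0.
WEIGHTS = {"heart_rate": 0.4, "lactate": 0.5, "age": 0.1}

def risk(subset):
    return sum(WEIGHTS[k] * v for k, v in subset.items())

patient = {"heart_rate": 1.2, "lactate": 2.0, "age": 0.5}
phi = shapley_values(patient, risk)
# For a linear model the attributions reduce to weight * value,
# and they sum exactly to the patient's total risk score.
```

For this linear toy the attributions are exact (e.g. lactate contributes 0.5 × 2.0 = 1.0), and their sum equals the full risk score — the additivity property that makes SHAP force plots and summary plots readable at the patient level.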

Authors

  • M Sree Vani
    Department of CSE, BVRIT Hyderabad College of Engineering for Women, Hyderabad 500090, India.
  • Rayapati Venkata Sudhakar
    Department of Computer Science and Engineering, Geethanjali College of Engineering and Technology, Cheeryal, Medchal District, Hyderabad 500043, Telangana, India. rayapati1113@gmail.com.
  • A Mahendar
    Department of CSE (Data Science), CMR Technical Campus, Kandlakoya, Hyderabad, Telangana 501401, India.
  • Sukanya Ledalla
    Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Bowrampet, Hyderabad, 500043, Telangana, India.
  • Marepalli Radha
    CSE Department, CVR College of Engineering, Hyderabad, Telangana, India.
  • M Sunitha
    CSE, Vasavi College of Engineering, Hyderabad, Telangana, India.