A Responsible Framework for Assessing, Selecting, and Explaining Machine Learning Models in Cardiovascular Disease Outcomes Among People With Type 2 Diabetes: Methodology and Validation Study.

Journal: JMIR Medical Informatics

Abstract

BACKGROUND: Building machine learning models that are interpretable, explainable, and fair is critical for their trustworthiness in clinical practice. Interpretability, which refers to how easily a human can comprehend the mechanism by which a model makes predictions, is often seen as a primary consideration when adopting a machine learning model in health care. However, interpretability alone does not necessarily guarantee explainability, which offers stakeholders insights into a model's predicted outputs. Moreover, many existing frameworks for model evaluation focus primarily on maximizing predictive accuracy, overlooking the broader need for interpretability, fairness, and explainability.

Authors

  • Yang Yang
    Department of Gastrointestinal Surgery, The Third Hospital of Hebei Medical University, Shijiazhuang, China.
  • Che-Yi Liao
    H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, 765 Ferst Dr NW, Atlanta, GA, 30332-0001, United States, 1 404-385-3140.
  • Esmaeil Keyvanshokooh
    Department of Information and Operations Management, Mays Business School, Texas A&M University, College Station, TX, United States.
  • Hui Shao
    Department of Pharmaceutical Outcomes and Policy, College of Pharmacy, University of Florida, Gainesville, FL, United States.
  • Mary Beth Weber
    Hubert Department of Global Health, Rollins School of Public Health, Emory University, Atlanta, GA, United States.
  • Francisco J Pasquel
    Hubert Department of Global Health, Rollins School of Public Health, Emory University, Atlanta, GA, United States.
  • Gian-Gabriel P Garcia
    Department of Industrial and Operations Engineering, University of Michigan College of Engineering, Ann Arbor, MI, United States.