Explainable artificial intelligence for predicting medical students' performance in comprehensive assessments.
Journal:
Scientific Reports
Published Date:
Jul 3, 2025
Abstract
Comprehensive medical assessments are critical for evaluating clinical proficiency in medical education; however, administering them imposes significant institutional burdens, financial costs, and psychological strain on students. While artificial intelligence (AI) holds transformative potential for predictive analytics, existing models lack the interpretability and reliability required for educational decision-making. To address this gap, a machine learning (ML) framework enhanced with explainable AI (XAI) was developed to predict medical students' performance on comprehensive assessments by integrating academic metrics and non-academic attributes. This retrospective cohort study validated the framework across three universities using two high-stakes assessments: the Comprehensive Medical Pre-Internship Examination (CMPIE; n = 997 students, two-month prediction horizon) and the Clinical Competence Assessment (CCA; n = 777 students, one-year horizon). A stacking meta-model combining ensemble techniques (Random Forest, Adaptive Boosting, XGBoost) demonstrated outstanding discriminative performance, with AUC-ROC values of 0.97 (CMPIE) and 0.99 (CCA) and corresponding F1-scores of 0.966 and 0.994. Within this framework, SHapley Additive exPlanations (SHAP) provided granular insight into the model's logic, identifying high-impact courses as dominant predictors of success and producing individualized risk profiles. These insights enable educators to prioritize curriculum reforms and implement early interventions for at-risk students, while delivering personalized feedback that helps learners enhance their outcomes.
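The pipeline the abstract describes can be approximated with standard open-source tooling. Below is a minimal sketch, assuming scikit-learn's StackingClassifier over the three named ensemble learners with a logistic-regression meta-learner (an assumption; the abstract does not specify the meta-learner) and SHAP's model-agnostic KernelExplainer for per-student explanations. The synthetic data, feature count, and all hyperparameters are illustrative placeholders, not the authors' dataset or configuration.

```python
# Illustrative sketch of a stacking meta-model with SHAP explanations,
# assuming scikit-learn, xgboost, and shap are installed. Data and
# hyperparameters are synthetic placeholders, not the study's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import shap

# Synthetic stand-in for academic metrics and non-academic attributes.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# Stacking meta-model over the three ensemble learners named in the
# abstract; the logistic-regression final estimator is an assumption.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("ada", AdaBoostClassifier(random_state=42)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=42)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_train, y_train)

proba = stack.predict_proba(X_test)[:, 1]
print("AUC-ROC:", roc_auc_score(y_test, proba))
print("F1     :", f1_score(y_test, stack.predict(X_test)))

# Model-agnostic SHAP values for per-student risk profiles: KernelExplainer
# only needs the stacked model's predict_proba, at some compute cost.
background = shap.sample(X_train, 50, random_state=42)
explainer = shap.KernelExplainer(
    lambda d: stack.predict_proba(d)[:, 1], background)
shap_values = explainer.shap_values(X_test[:5])  # explain a few students
print("Top feature impacts, first student:",
      np.argsort(-np.abs(shap_values[0]))[:3])
```

KernelExplainer is chosen here because the stacked model exposes only a prediction function rather than tree internals; on a real cohort, per-feature SHAP values could be aggregated across students to surface high-impact courses in the way the study reports.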