Interpretable machine learning for precision cognitive aging.

Journal: Frontiers in Computational Neuroscience

Abstract

INTRODUCTION: Machine performance has surpassed human capabilities in various tasks, yet the opacity of complex models limits their adoption in critical fields such as healthcare. Explainable AI (XAI) has emerged to address this by enhancing transparency and trust in AI decision-making. However, a persistent gap exists between interpretability and performance: black-box models, such as deep neural networks, often outperform white-box models, such as regression-based approaches. To bridge this gap, the Explainable Boosting Machine (EBM), a class of generalized additive models, has been introduced, combining the strengths of interpretable and high-performing models. EBM may be particularly well suited for cognitive health research, where traditional models struggle to capture nonlinear effects in cognitive aging and to account for inter- and intra-individual variability.
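The additive structure underlying EBMs can be sketched as follows; this is the standard generalized additive model formulation with optional pairwise interactions, stated here for context rather than taken from the article itself:

```latex
g\big(\mathbb{E}[y]\big) \;=\; \beta_0 \;+\; \sum_{i} f_i(x_i) \;+\; \sum_{i \neq j} f_{ij}(x_i, x_j)
```

Here $g$ is a link function, $\beta_0$ an intercept, and each shape function $f_i$ (or interaction term $f_{ij}$) depends on one feature (or feature pair) at a time. Because the model's prediction is a sum of these per-feature contributions, each term can be plotted and inspected individually, which is the source of the interpretability the abstract contrasts with black-box models.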

Authors

  • Abdoul Jalil Djiberou Mahamadou
    Stanford Center for Biomedical Ethics, Stanford University, Stanford, CA, United States.
  • Emma A Rodrigues
    School of Interactive Arts and Technology, Simon Fraser University, Surrey, BC, Canada.
  • Vasily Vakorin
    Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada.
  • Violaine Antoine
    CNRS ENSMSE LIMOS, Clermont Auvergne University, Clermont-Ferrand, France.
  • Sylvain Moreno
    School of Interactive Arts and Technology, Simon Fraser University, Surrey, BC, Canada.

Keywords

No keywords available for this article.