Supporting AI-Explainability by Analyzing Feature Subsets in a Machine Learning Model.

Journal: Studies in health technology and informatics

Abstract

Machine learning algorithms are becoming increasingly prevalent in medicine, as they can recognize patterns in complex medical data. In this sensitive domain especially, the active use of largely black-box models is controversial. We aim to show how an aggregated and systematic feature analysis of such models can be beneficial in the medical context. To this end, we introduce a grouped version of permutation importance analysis for evaluating the influence of entire feature subsets on a machine learning model. In this way, expert-defined subgroups can be assessed for their role in the decision-making process. Based on these results, new hypotheses can be formulated and examined.
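The abstract does not include an implementation, but the core idea of grouped permutation importance can be sketched as follows: permute all features of an expert-defined subset jointly and measure the resulting drop in model performance. The sketch below is illustrative only, not the authors' code; the function name, the use of scikit-learn, and the grouping of the breast cancer dataset's "mean"/"error"/"worst" feature blocks are all assumptions made for demonstration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def grouped_permutation_importance(model, X, y, groups, n_repeats=10, random_state=0):
    """Mean accuracy drop when all features of a group are permuted together.

    `groups` maps a group name to a list of column indices. Permuting the
    whole subset at once (same row permutation for every column in the group)
    measures the joint influence of the subset, including correlated features.
    """
    rng = np.random.default_rng(random_state)
    baseline = accuracy_score(y, model.predict(X))
    importances = {}
    for name, cols in groups.items():
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            idx = rng.permutation(len(X))
            # Replace the group's columns with a jointly shuffled copy.
            X_perm[:, cols] = X_perm[idx][:, cols]
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        importances[name] = float(np.mean(drops))
    return importances


X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Hypothetical expert-defined subsets: the dataset's "mean", "error",
# and "worst" statistic blocks (columns 0-9, 10-19, 20-29).
groups = {
    "mean": list(range(10)),
    "error": list(range(10, 20)),
    "worst": list(range(20, 30)),
}
importances = grouped_permutation_importance(clf, X_test, y_test, groups)
```

A large positive value for a group indicates that the model relies heavily on that feature subset, which can then prompt new clinical hypotheses about the subgroup.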

Authors

  • Lucas Plagwitz
    Institute for Translational Psychiatry, University of Münster, Münster, Germany.
  • Alexander Brenner
    Institute of Medical Informatics, University of Münster, Münster, Germany.
  • Michael Fujarski
    Institute of Medical Informatics, University of Münster, Münster, Germany.
  • Julian Varghese
    Institute of Medical Data Science, Otto-von-Guericke University, Magdeburg, Germany.