Reevaluating feature importance in machine learning: concerns regarding SHAP interpretations in the context of the EU artificial intelligence act.

Journal: Water Research
Published Date:

Abstract

This paper critically examines the AI-based analysis conducted by Maußner et al., particularly their interpretation of feature importances derived from various machine learning models using SHAP (SHapley Additive exPlanations). Although SHAP aids interpretability, it is subject to model-specific biases that can misrepresent relationships between variables. The paper emphasizes the absence of ground-truth values for feature importance and calls for careful choice of statistical methodology, including robust nonparametric approaches. By advocating the use of Spearman's rank correlation and Kendall's tau, each reported with p-values, this work aims to strengthen the integrity of findings in machine learning studies, ensuring that the conclusions drawn are reliable and actionable.
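A minimal sketch of the kind of nonparametric check the abstract advocates is shown below; it is not the author's code. It computes Spearman's rho and Kendall's tau, each with a p-value, for each feature against the target using scipy.stats. The synthetic dataset and feature names (feat_a, feat_b, feat_c) are hypothetical placeholders.

```python
# Hypothetical illustration of the abstract's recommended checks:
# per-feature Spearman's rho and Kendall's tau with p-values.
import numpy as np
from scipy.stats import spearmanr, kendalltau

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # 200 samples, 3 features
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(size=200)   # synthetic target

for i, name in enumerate(["feat_a", "feat_b", "feat_c"]):
    rho, rho_p = spearmanr(X[:, i], y)   # rank correlation and its p-value
    tau, tau_p = kendalltau(X[:, i], y)  # ordinal association and its p-value
    print(f"{name}: Spearman rho={rho:+.3f} (p={rho_p:.3g}), "
          f"Kendall tau={tau:+.3f} (p={tau_p:.3g})")
```

Under the abstract's argument, a feature ranked highly by SHAP but showing weak, non-significant rank correlations with the target would warrant further scrutiny, since the correlation statistics, unlike SHAP values, come with an explicit significance test.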

Authors

  • Yoshiyasu Takefuji
Faculty of Data Science, Musashino University, 3-3-3 Ariake, Koto-ku, Tokyo, 135-8181, Japan.