What is Interpretability?

Journal: Philosophy & Technology
Published Date:

Abstract

We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: (1) Are networks explainable, and if so, what does it mean to explain the output of a network? And (2) what does it mean for a network to be interpretable? We argue that accounts of "explanation" tailored specifically to neural networks have ineffectively reinvented the wheel. In response to (1), we show how four familiar accounts of explanation apply to neural networks as they would to any scientific phenomenon. We diagnose the confusion about explaining neural networks within the machine learning literature as an equivocation on "explainability," "understandability" and "interpretability." To remedy this, we distinguish between these notions, and answer (2) by offering a theory and typology of interpretation in machine learning. Interpretation is something one does to an explanation with the aim of producing another, more understandable, explanation. As with explanation, there are various concepts and methods involved in interpretation: total or partial, global or local, and approximative or isomorphic. Our account of "interpretability" is consistent with uses in the machine learning literature, in keeping with the philosophy of explanation and understanding, and pays special attention to medical artificial intelligence systems.

Authors

  • Adrian Erasmus
    Institute for the Future of Knowledge, University of Johannesburg, Johannesburg, South Africa.
  • Tyler D P Brunet
    Department of History and Philosophy of Science, University of Cambridge, Free School Ln., Cambridge, CB2 3RH UK.
  • Eyal Fisher
    Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Robinson Way, Cambridge, CB2 0RE UK.
