Explainability of deep neural networks for MRI analysis of brain tumors.

Journal: International Journal of Computer Assisted Radiology and Surgery
Published Date:

Abstract

PURPOSE: Artificial intelligence (AI), and in particular deep neural networks, has achieved remarkable results in medical image analysis across several applications. Yet the lack of explainability of deep neural models remains the principal barrier to applying these methods in clinical practice.

Authors

  • Ramy A Zeineldin
    Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany. Ramy.Zeineldin@Reutlingen-University.DE.
  • Mohamed E Karar
    Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, 32952, Egypt.
  • Ziad Elshaer
    Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany.
  • Jan Coburger
    Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany.
  • Christian R Wirtz
    Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany.
  • Oliver Burgert
    Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany.
  • Franziska Mathis-Ullrich
    Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), 76131, Karlsruhe, Germany.