Neurosurgery, Explainable AI, and Legal Liability.

Journal: Advances in experimental medicine and biology
PMID:

Abstract

One of the challenges of AI technologies is their "black box" nature, that is, the lack of explainability and interpretability of these technologies. This chapter explores whether AI systems in healthcare generally, and in neurosurgery specifically, should be explainable, for what purposes, and whether current XAI ("explainable AI") approaches and techniques are able to achieve those purposes. The chapter concludes that XAI techniques, at least currently, are neither the only nor necessarily the best way to achieve trust in AI and ensure patient autonomy or improved clinical decision-making, and that they are of limited significance in determining liability. Instead, we argue, we need more transparency around AI systems and their training and validation, as this information is likely to achieve these goals more effectively.

Authors

  • Rita Matulionyte
    Senior Lecturer, Macquarie Law School, Macquarie University.
  • Eric Suero Molina
Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, NSW, Australia.
  • Antonio Di Ieva
    Neurosurgery Unit, Department of Clinical Medicine, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, Australia.