Artificial intelligence and clinical decision support: clinicians' perspectives on trust, trustworthiness, and liability.

Journal: Medical Law Review

Abstract

Artificial intelligence (AI) could revolutionise health care, potentially improving clinician decision making and patient safety, and reducing the impact of workforce shortages. However, policymakers and regulators have concerns over whether AI and clinical decision support systems (CDSSs) are trusted by stakeholders, and indeed whether they are worthy of that trust. Yet what is meant by trust and trustworthiness is often left implicit, and it may not be clear who or what is being trusted. We address these lacunae, focusing largely on clinicians' perspectives on trust and trustworthiness in AI and CDSSs. Empirical studies suggest that clinicians' concerns about using these systems include the accuracy of the advice given and potential legal liability if harm to a patient occurs. Onora O'Neill's conceptualisation of trust and trustworthiness provides the framework for our analysis and generates a productive understanding of clinicians' reported trust issues. By unpacking these concepts, we gain greater clarity over the meanings stakeholders ascribe to them; delimit the extent to which stakeholders are talking at cross purposes; and demonstrate the continued utility of trust and trustworthiness as concepts in current debates around the use of AI and CDSSs.

Authors

  • Caroline Jones
    Hillary Rodham Clinton School of Law, Swansea University, Swansea, UK.
  • James Thornton
    Nottingham Law School, Nottingham Trent University, Nottingham, UK.
  • Jeremy C Wyatt
    Wessex Institute, University of Southampton, Southampton, UK.