The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models.

Journal: Science and Engineering Ethics

Abstract

This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents, such as the EU strategy on trustworthy AI, and the research literature have often suggested that AI could be made ethically acceptable through increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied in patient care, which we call the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model. We argue that the collaborative model is the most promising for most AI technologies, while the public deliberation model is called for when the technology is recognized as fundamentally transforming the conditions for ethical shared decision-making.

Authors

  • Torbjørn Gundersen
    Centre for the Study of Professions, Oslo Metropolitan University, Oslo, Norway.
  • Kristine Bærøe
    Department of Global Public Health and Primary Care, University of Bergen, PO Box 7804, N-5020 Bergen, Norway.