On Algorithmic Fairness in Medical Practice.

Journal: Cambridge Quarterly of Healthcare Ethics: CQ: The International Journal of Healthcare Ethics Committees

Abstract

The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. It therefore matters that we specify the different respects in which algorithmic bias can arise in medicine, and that we clarify the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. In this paper, we provide the building blocks for an account of algorithmic bias and its normative relevance in medicine.

Authors

  • Thomas Grote
    University of Tübingen, Cluster of Excellence: "Machine Learning: New Perspectives for Science", Ethics and Philosophy Lab, Maria von Linden Str. 6, D-72076 Tübingen, Germany. Electronic address: thomas.grote@uni-tuebingen.de.
  • Geoff Keeling
    Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK.