Algorithmic fairness in artificial intelligence for medicine and healthcare.

Journal: Nature Biomedical Engineering

Abstract

In healthcare, the development and deployment of insufficiently fair systems of artificial intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models stratified across subpopulations have revealed inequalities in how patients are diagnosed, treated and billed. In this Perspective, we outline fairness in machine learning through the lens of healthcare, and discuss how algorithmic biases (in data acquisition, genetic variation and intra-observer labelling variability, in particular) arise in clinical workflows and the resulting healthcare disparities. We also review emerging technology for mitigating biases via disentanglement, federated learning and model explainability, and their role in the development of AI-based software as a medical device.
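The stratified assessments mentioned above evaluate a model separately within each patient subpopulation and compare the resulting metrics. As an illustration only (not the authors' method), the sketch below computes the true-positive rate per subgroup and the largest pairwise gap, an equal-opportunity-style disparity measure; all data, group labels and function names are synthetic and hypothetical.

```python
# Sketch: subgroup-stratified evaluation of a binary classifier,
# in the spirit of the fairness audits described in the abstract.
# All data below is synthetic; function names are illustrative.

def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN), computed over positive cases only."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return float("nan")
    return sum(p for _, p in positives) / len(positives)

def stratified_tpr_gap(y_true, y_pred, groups):
    """Per-subgroup TPRs and the largest pairwise TPR difference
    (an equal-opportunity-style disparity measure)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = true_positive_rate([y_true[i] for i in idx],
                                      [y_pred[i] for i in idx])
    return rates, max(rates.values()) - min(rates.values())

# Synthetic cohort with two subgroups, "A" and "B"
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
rates, gap = stratified_tpr_gap(y_true, y_pred, groups)
print(rates, round(gap, 2))  # group B's TPR lags group A's
```

A large gap between subgroups (here, group B is diagnosed correctly far less often than group A) is the kind of inequality that stratified assessments are designed to surface before deployment.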

Authors

  • Richard J Chen
    Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
  • Judy J Wang
    Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
  • Drew F K Williamson
    Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
  • Tiffany Y Chen
    Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
  • Jana Lipkova
    Department of Informatics, Technische Universität München, Munich, Germany.
  • Ming Y Lu
    Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
  • Sharifa Sahai
    Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
  • Faisal Mahmood
    Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA. faisalmahmood@bwh.harvard.edu.