Assessing socioeconomic bias in machine learning algorithms in health care: a case study of the HOUSES index.

Journal: Journal of the American Medical Informatics Association (JAMIA)
PMID:

Abstract

OBJECTIVE: Artificial intelligence (AI) models may propagate harmful biases in performance and hence negatively affect the underserved. We aimed to assess the degree to which the data quality of electronic health records (EHRs), affected by inequities related to low socioeconomic status (SES), results in differential performance of AI models across SES.
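
The abstract does not describe the analysis itself, but the stated objective, comparing AI model performance across SES strata defined by the HOUSES index, can be illustrated with a minimal sketch. The data, column names (y_true, y_score, houses_quartile), and quartile coding below are hypothetical assumptions, not the authors' actual method:

```python
# Minimal sketch of SES-stratified model evaluation (hypothetical data and
# column names; not the study's actual pipeline). Computes AUROC within each
# HOUSES-index quartile to surface differential performance across SES.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "y_true": rng.integers(0, 2, n),           # observed binary outcome
    "y_score": rng.random(n),                  # model-predicted probability
    "houses_quartile": rng.integers(1, 5, n),  # assumed coding: 1 = lowest SES
})

# AUROC per SES stratum; large gaps between strata would indicate
# differential model performance across SES.
for q, grp in df.groupby("houses_quartile"):
    auc = roc_auc_score(grp["y_true"], grp["y_score"])
    print(f"HOUSES quartile {q}: AUROC = {auc:.3f} (n = {len(grp)})")
```

With random data the per-quartile AUROCs hover near 0.5; on real EHR-derived predictions, the same stratified comparison is one straightforward way to quantify the performance gap the objective describes.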

Authors

  • Young J Juhn
    Department of Pediatric and Adolescent Medicine, Mayo Clinic, Rochester, MN, USA; Asthma Epidemiology Research Unit, Mayo Clinic, Rochester, MN, USA. Electronic address: Juhn.young@mayo.edu.
  • Euijung Ryu
    Department of Health Sciences Research, Mayo Clinic, Rochester, MN, USA.
  • Chung-Il Wi
    Department of Pediatric and Adolescent Medicine, Mayo Clinic, Rochester, MN, USA; Asthma Epidemiology Research Unit, Mayo Clinic, Rochester, MN, USA.
  • Katherine S King
    Department of Quantitative Health Sciences, Mayo Clinic, Rochester, MN, USA.
  • Momin Malik
    Carnegie Mellon University, Pittsburgh, PA, USA.
  • Santiago Romero-Brufau
    Mayo Clinic Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA; Department of Biostatistics, Harvard T. H. Chan School of Public Health, Harvard University, Boston, MA, USA. Electronic address: RomeroBrufau.Santiago@mayo.edu.
  • Chunhua Weng
    Department of Biomedical Informatics, Columbia University, New York, NY, USA.
  • Sunghwan Sohn
    Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA.
  • Richard R Sharp
    Biomedical Ethics Program, Mayo Clinic, Rochester, MN, USA.
  • John D Halamka
    Beth Israel Deaconess Medical Center, Boston, MA, USA.