A survey of recent methods for addressing AI fairness and bias in biomedicine.

Journal: Journal of Biomedical Informatics

Abstract

OBJECTIVES: Artificial intelligence (AI) systems have the potential to revolutionize clinical practice, improving diagnostic accuracy and surgical decision-making while reducing costs and labor. However, these systems may also perpetuate social inequities or exhibit biases, such as those based on race or gender. Such biases can arise before, during, or after model development, so understanding and addressing them is critical for the accurate and reliable application of AI models in clinical settings. To address bias concerns during model development, we surveyed recent publications on debiasing methods in biomedical natural language processing (NLP) and computer vision (CV). We then discuss the methods, such as data perturbation and adversarial learning, that have been applied in the biomedical domain to mitigate bias.
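To make the data-perturbation idea concrete, a common form is counterfactual augmentation: gendered terms in a clinical note are swapped so the model sees both variants during training. The sketch below is illustrative only; the term list is a toy example, not a clinical lexicon, and `perturb_gender`/`augment` are hypothetical helper names, not functions from any surveyed work.

```python
import re

# Illustrative term pairs for counterfactual perturbation.
# Note: "her" is ambiguous (object "him" vs. possessive "his");
# a real system would need part-of-speech context to resolve this.
SWAP = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "male": "female", "female": "male",
    "man": "woman", "woman": "man",
}

# Word-boundary pattern over all swap keys; \b prevents matching
# "male" inside "female" or "man" inside "woman".
PATTERN = re.compile(r"\b(" + "|".join(SWAP) + r")\b", re.IGNORECASE)

def perturb_gender(text: str) -> str:
    """Return a counterfactual copy of text with gendered terms swapped."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAP[word.lower()]
        # Preserve the capitalization of the original token.
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(replace, text)

def augment(corpus: list[str]) -> list[tuple[str, str]]:
    """Pair each note with its perturbed counterpart to balance a corpus."""
    return [(note, perturb_gender(note)) for note in corpus]
```

For example, `perturb_gender("The male patient said he felt dizzy")` yields `"The female patient said she felt dizzy"`; training on both variants discourages the model from tying predictions to gendered surface cues.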

Authors

  • Yifan Yang
    College of Food Science, Sichuan Agricultural University, Ya'an 625014, China.
  • Mingquan Lin
    Department of Population Health Sciences, Weill Cornell Medicine, New York, USA.
  • Han Zhao
  • Yifan Peng
    Department of Population Health Sciences, Weill Cornell Medicine, New York, USA.
  • Furong Huang
    Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Jinan University, Guangzhou 510632, China; Research Institute of Jinan University in Dongguan, Dongguan 523000, China. Electronic address: furong_huang@jnu.edu.cn.
  • Zhiyong Lu
National Center for Biotechnology Information, Bethesda, MD 20894, USA.