What's in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT.

Journal: Journal of Medical Internet Research

Abstract

BACKGROUND: Artificial intelligence chatbots such as ChatGPT (OpenAI) have garnered excitement about their potential for delegating writing tasks ordinarily performed by humans. Many of these tasks (eg, writing recommendation letters) have social and professional ramifications, making the potential social biases in ChatGPT's underlying language model a serious concern.

Authors

  • Deanna M Kaplan
    Department of Family and Preventive Medicine, Emory University School of Medicine, Atlanta, GA, United States.
  • Roman Palitsky
    Emory Spiritual Health, Woodruff Health Science Center, Emory University, Atlanta, GA, United States.
  • Santiago J Arconada Alvarez
    Emory University School of Medicine, Atlanta, GA, United States.
  • Nicole S Pozzo
    Department of Family and Preventive Medicine, Emory University School of Medicine, Atlanta, GA, United States.
  • Morgan N Greenleaf
    Emory University School of Medicine, Atlanta, GA, United States.
  • Ciara A Atkinson
    Department of Campus Recreation, University of Arizona, Tucson, AZ, United States.
  • Wilbur A Lam
The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology & Emory University, Atlanta, GA, United States. wilbur.lam@emory.edu.