Humor as a window into generative AI bias.

Journal: Scientific Reports
PMID:

Abstract

A preregistered audit of 600 AI-generated images across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them "funnier", the prevalence of stereotyped groups changes. Stereotyped groups defined by politically sensitive traits (i.e., race and gender) are less likely to be represented after an image is made funnier, whereas stereotyped groups defined by less politically sensitive traits (i.e., older people, visually impaired people, and people with high body weight) are more likely to be represented.

Authors

  • Roger Saumure
    Department of Marketing, The Wharton School, University of Pennsylvania, Philadelphia, PA, USA. saumure@wharton.upenn.edu.
  • Julian De Freitas
    Marketing Unit, Harvard Business School, Boston, MA, USA. jdefreitas@hbs.edu.
  • Stefano Puntoni
    The Wharton School, University of Pennsylvania, 3730 Walnut Street, Philadelphia, PA 19104, USA.