AI-generated faces influence gender stereotypes and racial homogenization.

Journal: Scientific Reports
PMID:

Abstract

Text-to-image generative AI models such as Stable Diffusion are used daily by millions worldwide. However, the extent to which these models exhibit racial and gender stereotypes is not yet fully understood. Here, we document significant biases in Stable Diffusion across six races, two genders, 32 professions, and eight attributes. Additionally, we examine the degree to which Stable Diffusion depicts individuals of the same race as being similar to one another. This analysis reveals significant racial homogenization, e.g., depicting nearly all Middle Eastern men as bearded, brown-skinned, and wearing traditional attire. We then propose debiasing solutions that allow users to specify the desired distributions of race and gender when generating images while minimizing racial homogenization. Finally, using a preregistered survey experiment, we find evidence that being presented with inclusive AI-generated faces reduces people's racial and gender biases, while being presented with non-inclusive ones increases such biases, regardless of whether the images are labeled as AI-generated. Taken together, our findings emphasize the need to address biases and stereotypes in text-to-image models.
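The abstract mentions debiasing solutions that let users specify the desired race and gender distributions at generation time, but does not spell out the mechanism here. One common way to realize this kind of control is prompt-level conditioning: sample demographic descriptors from the user's target distribution and inject them into each prompt before generation. The sketch below illustrates that idea only; the descriptor lists, weights, and helper names (build_prompts, sample_descriptor) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of prompt-level distribution control for a text-to-image
# model. This is NOT the paper's implementation; descriptor labels, weights,
# and function names are hypothetical placeholders.
import random

# Hypothetical user-specified target distributions (weights sum to 1.0).
RACE_DIST = {
    "White": 0.25,
    "Black": 0.25,
    "East Asian": 0.25,
    "Middle Eastern": 0.25,
}
GENDER_DIST = {"woman": 0.5, "man": 0.5}


def sample_descriptor(distribution: dict[str, float]) -> str:
    """Draw one descriptor according to the user-specified weights."""
    labels = list(distribution)
    weights = list(distribution.values())
    return random.choices(labels, weights=weights, k=1)[0]


def build_prompts(base_prompt: str, n: int) -> list[str]:
    """Prepend sampled race/gender descriptors so that, in expectation,
    a batch of n generated images matches the requested demographic mix."""
    prompts = []
    for _ in range(n):
        race = sample_descriptor(RACE_DIST)
        gender = sample_descriptor(GENDER_DIST)
        prompts.append(f"a photo of a {race} {gender} {base_prompt}")
    return prompts


if __name__ == "__main__":
    for p in build_prompts("working as a doctor", n=4):
        # In practice each prompt would be passed to a text-to-image
        # pipeline such as Stable Diffusion (omitted to stay dependency-free).
        print(p)
```

Note that because descriptors are drawn independently per image, the realized batch matches the target distribution only in expectation; an exact-quota variant would allocate per-group counts deterministically (e.g., round n times each weight) and shuffle the resulting prompt list.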

Authors

  • Nouar AlDahoul
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia.
  • Talal Rahwan
    Computer Science, Science Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates.
  • Yasir Zaki
Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates. yasir.zaki@nyu.edu.