Existential risk narratives about AI do not distract from its immediate harms.

Journal: Proceedings of the National Academy of Sciences of the United States of America

Abstract

There is broad consensus that AI presents risks, but considerable disagreement about the nature of those risks. These differing viewpoints can be understood as distinct narratives, each offering a specific interpretation of AI's potential dangers. One narrative focuses on doomsday predictions of AI posing long-term existential risks for humanity. Another narrative prioritizes immediate concerns that AI brings to society today, such as the reproduction of biases embedded into AI systems. A significant point of contention is that the "existential risk" narrative, which is largely speculative, may distract from the less dramatic but real and present dangers of AI. We address this "distraction hypothesis" by examining whether a focus on existential threats diverts attention from the immediate risks AI poses today. In three preregistered, online survey experiments (N = 10,800), participants were exposed to news headlines that either depicted AI as a catastrophic risk, highlighted its immediate societal impacts, or emphasized its potential benefits. Results show that i) respondents are much more concerned with the immediate, rather than existential, risks of AI, and ii) existential risk narratives increase concerns for catastrophic risks without diminishing the significant worries respondents express for immediate harms. These findings provide important empirical evidence to inform ongoing scientific and political debates on the societal implications of AI.

Authors

  • Emma Hoes
    Department of Political Science, University of Zurich, Zurich 8050, Switzerland.
  • Fabrizio Gilardi
    Department of Political Science, University of Zurich, Zurich 8050, Switzerland.