Specific media literacy tips improve AI-generated visual misinformation discernment.

Journal: Cognitive Research: Principles and Implications

Abstract

Images generated using artificial intelligence (AI) have become increasingly realistic, sparking discussions and fears about an impending "infodemic" in which we can no longer trust what we see on the internet. In this preregistered study, we examine whether providing specific media literacy tips about how to spot AI-generated images can reduce susceptibility to AI-generated visual misinformation (AIVM). Participants were randomly assigned to one of three conditions: reading specific media literacy tips, general media literacy tips, or no media literacy tips (control). The general tips covered how to spot misinformation broadly, while the specific tips offered more detailed guidance on detecting AIVM. Results showed that specific tips increased headline discernment between true and false information more than general tips did. Both media literacy interventions reduced belief in AIVM compared to control, but specific tips reduced belief in AIVM more than general tips. Finally, both specific and general tips also reduced belief in real headlines compared to control, with no difference between them. In an information environment with an increasing prevalence of AIVM, it may be worth being specific about how to detect misinformation online rather than providing only general guidance.

Authors

  • Sean Guo
    Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong SAR, China.
  • Briony Swire-Thompson
    College of Social Sciences and Humanities, Northeastern University, Boston, MA, USA.
  • Xiaoqing Hu
    Department of Radiology, The First Hospital of Shanxi Medical University, Taiyuan 030001, China.