Emotional prompting amplifies disinformation generation in AI large language models.

Journal: Frontiers in Artificial Intelligence

Abstract

INTRODUCTION: The emergence of artificial intelligence (AI) large language models (LLMs) capable of producing text that closely resembles human-written content presents both opportunities and risks. While these models can improve communication, for example in health-related crisis communication, they can also facilitate the creation of convincing fake news and disinformation. The widespread dissemination of AI-generated disinformation compounds the challenges of the ongoing infodemic, with significant consequences for public health and the stability of democratic institutions.

Authors

  • Rasita Vinay
    Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zürich, Switzerland.
  • Giovanni Spitale
    Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zürich, Switzerland.
  • Nikola Biller-Andorno
    Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zürich, Switzerland.
  • Federico Germani
    Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zürich, Switzerland.
