Emotional prompting amplifies disinformation generation in AI large language models.
Journal: Frontiers in Artificial Intelligence
Published: Apr 7, 2025
Abstract
INTRODUCTION: The emergence of artificial intelligence (AI) large language models (LLMs), which can produce text that closely resembles human-written content, presents both opportunities and risks. While these developments offer significant opportunities for improving communication, such as in health-related crisis communication, they also pose substantial risks by facilitating the creation of convincing fake news and disinformation. The widespread dissemination of AI-generated disinformation compounds the challenges of the ongoing infodemic, with significant consequences for public health and the stability of democratic institutions.