Use of Retrieval-Augmented Large Language Model for COVID-19 Fact-Checking: Development and Usability Study.

Journal: Journal of Medical Internet Research
PMID:

Abstract

BACKGROUND: The COVID-19 pandemic has been accompanied by an "infodemic," in which the rapid spread of misinformation has exacerbated public health challenges. Traditional fact-checking methods, though effective, are time-consuming and resource-intensive, limiting their ability to combat misinformation at scale. Large language models (LLMs) such as GPT-4 offer a more scalable solution, but their susceptibility to generating hallucinations (plausible yet incorrect information) compromises their reliability.
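For illustration only, the sketch below shows the general retrieval-augmented pattern the title refers to: a claim is matched against a small vetted evidence set, and the retrieved passages are passed to the model together with the claim so that the verdict is grounded in evidence rather than in the model's parametric memory. The EVIDENCE corpus, the word-overlap retriever, and the query_llm stand-in are hypothetical and do not describe the authors' implementation.

from collections import Counter

# Hypothetical evidence corpus of vetted COVID-19 statements (illustrative only).
EVIDENCE = [
    "COVID-19 vaccines approved by regulators underwent clinical trials for safety and efficacy.",
    "Masks reduce the spread of respiratory droplets that can carry SARS-CoV-2.",
    "There is no evidence that drinking bleach prevents or cures COVID-19.",
]

def tokenize(text):
    # Lowercase bag-of-words representation of a passage or claim.
    return Counter(text.lower().split())

def retrieve(claim, corpus, k=2):
    # Rank passages by word overlap with the claim and keep the top k.
    claim_tokens = tokenize(claim)
    scored = [(sum((tokenize(doc) & claim_tokens).values()), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def query_llm(prompt):
    # Stand-in for a real LLM call (e.g., GPT-4 via an API client);
    # returns a canned string here so the sketch runs end to end.
    return "[model verdict would appear here]"

def fact_check(claim):
    # Ground the verdict in retrieved evidence to limit hallucination.
    evidence = retrieve(claim, EVIDENCE)
    prompt = (
        "Using only the evidence below, label the claim TRUE, FALSE, or UNVERIFIABLE "
        "and explain briefly.\n\nEvidence:\n- " + "\n- ".join(evidence)
        + "\n\nClaim: " + claim
    )
    return query_llm(prompt)

print(fact_check("Drinking bleach cures COVID-19."))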

Authors

  • Hai Li
    School of Economics and Management, Shanghai University of Sport, Shanghai, China.
  • Jingyi Huang
    School of Economics and Management, Shanghai University of Sport, Shanghai, China.
  • Mengmeng Ji
    Department of Surgery, Division of Public Health Sciences, Washington University School of Medicine in St. Louis, St. Louis, MO, United States.
  • Yuyi Yang
    Division of Computational and Data Sciences, Washington University in St. Louis, St. Louis, MO, United States.
  • Ruopeng An
Silver School of Social Work, New York University, New York, NY 10012, United States. Electronic address: ra4605@nyu.edu.