Accelerating clinical evidence synthesis with large language models.

Journal: npj Digital Medicine

Abstract

Clinical evidence synthesis largely relies on systematic reviews (SRs) of clinical studies from the medical literature. Here, we propose a generative artificial intelligence (AI) pipeline named TrialMind to streamline the study search, study screening, and data extraction tasks in SRs. We used published SRs to build TrialReviewBench, a benchmark containing 100 SRs and 2,220 clinical studies. For study search, TrialMind achieves high recall rates (0.711-0.834 vs. a human baseline of 0.138-0.232). For study screening, it outperforms previous document ranking methods by a 1.5-2.6-fold margin. For data extraction, it exceeds GPT-4's accuracy by 16-32%. In a pilot study, human-AI collaboration with TrialMind improved recall by 71.4% and reduced screening time by 44.2%; in data extraction, accuracy increased by 23.5% with a 63.4% time reduction. Medical experts preferred TrialMind's synthesized evidence over GPT-4's in 62.5%-100% of cases. These findings show the promise of accelerating clinical evidence synthesis through human-AI collaboration.
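The abstract describes TrialMind as a three-stage pipeline (study search, study screening, data extraction) evaluated chiefly by recall against the studies included in published SRs. A minimal sketch of that evaluation setup is below; all function and field names are hypothetical, and the search/screening stages are keyword stubs standing in for the paper's LLM-driven components.

```python
# Illustrative sketch only: mimics the search -> screen pipeline shape and the
# recall metric reported in the abstract. Not the authors' implementation.
from dataclasses import dataclass


@dataclass
class Study:
    pmid: str
    title: str


def search_studies(query: str, corpus: list[Study]) -> list[Study]:
    """Stage 1 (hypothetical): retrieve candidate studies for a review topic."""
    terms = query.lower().split()
    return [s for s in corpus if any(t in s.title.lower() for t in terms)]


def screen_studies(candidates: list[Study], criterion: str) -> list[Study]:
    """Stage 2 (hypothetical): filter candidates against an eligibility
    criterion. A real pipeline would prompt an LLM here."""
    return [s for s in candidates if criterion.lower() in s.title.lower()]


def recall(retrieved: list[Study], relevant: list[Study]) -> float:
    """Recall: the fraction of truly relevant studies that were retrieved."""
    found = {s.pmid for s in retrieved} & {s.pmid for s in relevant}
    return len(found) / len(relevant) if relevant else 0.0


corpus = [
    Study("1", "Statin therapy randomized trial"),
    Study("2", "Statin observational cohort"),
    Study("3", "Unrelated imaging study"),
]
hits = search_studies("statin trial", corpus)          # studies 1 and 2
included = screen_studies(hits, "randomized")          # study 1 only
print(recall(hits, corpus[:2]))                        # → 1.0
```

In the paper's evaluation, the "relevant" set is the list of studies the original SR authors included, so recall measures how many of those the pipeline recovers.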

Authors

  • Zifeng Wang
    Department of ECE, Northeastern University, Boston, MA, USA.
  • Lang Cao
    Department of Computer Science, University of Illinois Urbana-Champaign, Urbana, IL, USA.
  • Benjamin Danek
    Siebel School of Computing and Data Science, University of Illinois Urbana-Champaign, Urbana, IL, USA.
  • Qiao Jin
    National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
  • Zhiyong Lu
    National Center for Biotechnology Information, Bethesda, MD 20894, USA.
  • Jimeng Sun
    College of Computing, Georgia Institute of Technology, Atlanta, GA, USA.
