Careful design of Large Language Model pipelines enables expert-level retrieval of evidence-based information from syntheses and databases.

Journal: PLoS One
PMID:

Abstract

Wise use of evidence to support efficient conservation action is key to tackling biodiversity loss with limited time and resources. Evidence syntheses provide key recommendations for conservation decision-makers by assessing and summarising evidence, but are not always easy to access, digest, and use. Recent advances in Large Language Models (LLMs) present both opportunities and risks for enabling faster and more intuitive systems to access evidence syntheses and databases. Such systems for natural language search and open-ended evidence-based responses are pipelines comprising many components. The most critical of these components are the LLM used and how evidence is retrieved from the database. We evaluated the performance of ten LLMs across six different database retrieval strategies against human experts in answering synthetic multiple-choice question exams on the effects of conservation interventions, using the Conservation Evidence database. We found that LLM performance was comparable with that of human experts over 45 filtered questions, both in answering them correctly and in retrieving the document used to generate them. Across 1867 unfiltered questions, LLMs demonstrated a level of conservation-specific knowledge, but performance varied across topic areas. A hybrid retrieval strategy that combines keywords and vector embeddings performed best by a substantial margin. We also tested a state-of-the-art previous-generation LLM, which was outperformed by all ten current models, including smaller, cheaper ones. Our findings suggest that, with careful domain-specific design, LLMs could be powerful tools for enabling expert-level use of evidence syntheses and databases in different disciplines. However, general LLMs used 'out-of-the-box' are likely to perform poorly and misinform decision-makers. By establishing that LLMs perform comparably with human synthesis experts when providing restricted responses to queries of evidence syntheses and databases, future work can build on our approach to quantify LLM performance in providing open-ended responses.
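The abstract does not specify how the hybrid retrieval strategy is implemented, but retrieval of this kind is typically built by fusing a keyword ranking (e.g. BM25) with a vector-similarity ranking over the same document set. The sketch below is an illustration of that general technique rather than the authors' pipeline: it combines two hypothetical rankings with reciprocal rank fusion, and the example rankings stand in for whatever keyword index and embedding model a real system would use.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs into one hybrid ranking.

    Each ranking is a list of doc IDs ordered best-first. A document's fused
    score is the sum over rankings of 1 / (k + rank), so documents ranked
    highly by either keyword or vector search rise towards the top.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings for one query: one from a keyword index (e.g. BM25),
# one from cosine similarity over vector embeddings of the same documents.
keyword_ranking = ["doc_12", "doc_07", "doc_33", "doc_02"]
vector_ranking = ["doc_07", "doc_02", "doc_12", "doc_41"]

hybrid = reciprocal_rank_fusion([keyword_ranking, vector_ranking])
print(hybrid)  # documents favoured by both lists (doc_07, doc_12) come first
```

In a retrieval-augmented pipeline of the kind described, the top-ranked documents from such a fusion step would then be passed to the LLM as context for answering the query.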

Authors

  • Radhika Iyer
    Department of Zoology, University of Cambridge, Cambridge, United Kingdom.
  • Alec Philip Christie
    Department of Zoology, University of Cambridge, Cambridge, United Kingdom.
  • Anil Madhavapeddy
    Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, UK.
  • Sam Reynolds
    Department of Zoology, University of Cambridge, Cambridge, United Kingdom.
  • William Sutherland
    Department of Zoology, University of Cambridge, Cambridge, United Kingdom.
  • Sadiq Jaffer
    Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, UK.