A Mixed-Methods Evaluation of LLM-Based Chatbots for Menopause.

Journal: Studies in health technology and informatics
Published Date:

Abstract

The integration of Large Language Models (LLMs) into healthcare settings has gained significant attention, particularly for question-answering tasks. Given the high-stakes nature of healthcare, it is essential to ensure that LLM-generated content is accurate and reliable to prevent adverse outcomes. However, the development of robust evaluation metrics and methodologies remains a subject of ongoing debate. We examine the performance of publicly available LLM-based chatbots on menopause-related queries, using a mixed-methods approach to evaluate safety, consensus, objectivity, reproducibility, and explainability. Our findings highlight both the promise and the limitations of traditional evaluation metrics for sensitive health topics. We argue for customized, ethically grounded evaluation frameworks for assessing LLMs in order to advance their safe and effective use in healthcare.
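To make the quantitative side of such a mixed-methods evaluation concrete, the sketch below shows one way expert ratings along the five dimensions named in the abstract could be aggregated. This is an illustrative assumption, not the paper's published instrument: the dimension names come from the abstract, while the 1-5 rating scale, the reviewer data, and the function name mean_dimension_scores are hypothetical.

```python
# Illustrative sketch only: dimensions are taken from the abstract; the
# rating scale, aggregation, and all names are hypothetical examples of how
# per-dimension ratings from multiple reviewers might be tallied.
from statistics import mean

DIMENSIONS = ["safety", "consensus", "objectivity", "reproducibility", "explainability"]

def mean_dimension_scores(ratings: list[dict[str, int]]) -> dict[str, float]:
    """Average per-dimension ratings (e.g., 1-5 Likert) across reviewers."""
    return {d: mean(r[d] for r in ratings) for d in DIMENSIONS}

# Example: two hypothetical reviewers scoring one chatbot response
# to a menopause-related query.
reviewer_ratings = [
    {"safety": 4, "consensus": 5, "objectivity": 4, "reproducibility": 3, "explainability": 4},
    {"safety": 5, "consensus": 4, "objectivity": 4, "reproducibility": 3, "explainability": 5},
]

if __name__ == "__main__":
    print(mean_dimension_scores(reviewer_ratings))
```

In a full study, these per-dimension scores would typically be read alongside qualitative review of the chatbot responses rather than interpreted in isolation.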

Authors

  • Roshini Deva
    Emory University, Atlanta, GA, United States.
  • Manvi S
    Emory University, Atlanta, GA, United States.
  • Jasmine Zhou
    Emory University, Atlanta, GA, United States.
  • Elizabeth Britton Chahine
    Emory University, Atlanta, GA, United States.
  • Agena Davenport-Nicholson
    Emory University, Atlanta, GA, United States.
  • Nadi Nina Kaonga
    Emory University, Atlanta, GA, United States.
  • Selen Bozkurt
Department of Biostatistics and Medical Informatics, Akdeniz University Faculty of Medicine, 48000 Antalya, Turkey.
  • Azra Ismail
    Emory University, Atlanta, GA, United States.