Exploring the Potential of Non-Proprietary Language Models for Analysing Patient-Reported Experiences.

Journal: Studies in health technology and informatics
PMID:

Abstract

Large language models (LLMs) are increasingly being explored for various applications in medical language processing. Due to data privacy concerns, it is recommended to apply non-proprietary models that can be run locally. This study therefore aims to assess the potential of non-proprietary language models for analysing patient-reported experiences, focusing on their ability to categorise and provide insights into the reported experiences. Three language models - Gemma 2, Llama 3.2 and Llama 3 - were applied to a dataset of 155 patient-reported texts from PatBox.ch, the Swiss national patient experience reporting system. Gemma 2 performed best among the three models tested, with 67% accurate, 17% false and 16% incomplete extractions for persons; 63% correct, 16% false and 30% incomplete categorisations for events; and 86% correct and 14% false results for emotion classification. Although the categorisations tended to be general in nature, the models identified relevant key themes, providing valuable insights into the data and revealing possible quality improvements.

Authors

  • Kerstin Denecke
    Institute for Medical Informatics, Bern University of Applied Sciences, Bern, Switzerland.