Comparative evaluation of ChatGPT and LLaMA for reliability, quality, and accuracy in familial Mediterranean fever.
Journal:
European Journal of Pediatrics
Published Date:
Jul 18, 2025
Abstract
Familial Mediterranean fever (FMF) is the most common monogenic autoinflammatory disease. Large language models (LLMs) offer rapid access to medical information. This study evaluated and compared the reliability, quality, and accuracy of ChatGPT-4o and LLaMA-3.1 in answering questions about FMF. The Single Hub and Access Point for Paediatric Rheumatology in Europe (SHARE) and European League Against Rheumatism (EULAR) guidelines were used both to generate the questions and to validate the answers. Thirty-one questions were developed from a clinician's perspective based on these guidelines. Two pediatric rheumatologists, each with over 20 years of FMF experience, independently and blindly evaluated the responses. Reliability, quality, and accuracy were assessed using the modified DISCERN scale, the Global Quality Score, and the guidelines, respectively. Readability was assessed using multiple established indices. Statistical analyses included the Shapiro-Wilk test to assess normality, followed by paired t-tests for normally distributed scores and Wilcoxon signed-rank tests for non-normally distributed scores. Both models demonstrated moderate reliability and high response quality. In terms of alignment with the guidelines, LLaMA provided fully aligned, complete, and accurate responses to 51.6% (16/31) of the questions, whereas ChatGPT did so for 80.6% (25/31). While 9.7% (4/31) of LLaMA's responses directly contradicted the guidelines, ChatGPT produced no such responses. ChatGPT significantly outperformed LLaMA in accuracy, quality, and reliability. Readability assessments showed that responses from both LLMs required college-level reading ability.
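For readers unfamiliar with this paired-comparison workflow, the following is a minimal sketch, not the authors' code, of how such an analysis might be run in Python with SciPy; the score lists and the 0.05 normality threshold are hypothetical placeholders.

# Sketch only: choose a paired t-test or Wilcoxon signed-rank test
# for two raters' scores of the same 31 questions, based on normality.
from scipy import stats

# Hypothetical evaluator scores for the same set of questions
chatgpt_scores = [5, 4, 5, 3, 5, 4, 4, 5, 3, 4]
llama_scores   = [4, 3, 4, 3, 4, 3, 4, 4, 2, 3]

# Shapiro-Wilk test of normality on each score distribution
_, p_chatgpt = stats.shapiro(chatgpt_scores)
_, p_llama = stats.shapiro(llama_scores)

if p_chatgpt > 0.05 and p_llama > 0.05:
    # Both distributions look normal: paired t-test
    stat, p_value = stats.ttest_rel(chatgpt_scores, llama_scores)
    test_name = "paired t-test"
else:
    # Non-normal distribution(s): Wilcoxon signed-rank test
    stat, p_value = stats.wilcoxon(chatgpt_scores, llama_scores)
    test_name = "Wilcoxon signed-rank test"

print(f"{test_name}: statistic={stat:.3f}, p={p_value:.3f}")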