A pilot evaluation of the diagnostic accuracy of ChatGPT-3.5 for multiple sclerosis from case reports.

Journal: Translational Neuroscience
Published Date:

Abstract

The limitations of artificial intelligence (AI) large language models in diagnosing diseases remain underexplored from the perspective of patient safety, and potential problems, such as diagnostic errors and legal liability, need to be addressed. To demonstrate these limitations, we used ChatGPT-3.5, developed by OpenAI, as a tool for medical diagnosis using text-based case reports of multiple sclerosis (MS), which was selected as a prototypic disease. We analyzed 98 peer-reviewed case reports, selected on the basis of free full-text availability and publication within the past decade (2014-2024), excluding any mention of an MS diagnosis to avoid bias. ChatGPT-3.5 was used to interpret the clinical presentations and laboratory data from these reports. The model correctly diagnosed MS in 77 of the 98 cases, an accuracy of 78.6%. The remaining 21 cases were misdiagnosed, highlighting the model's limitations. Factors contributing to these errors include variability in data presentation and the inherent complexity of MS diagnosis, which requires imaging in addition to clinical presentation and laboratory data. While these findings suggest that AI can support disease diagnosis and assist healthcare providers in decision-making, inadequate training on large datasets may lead to significant inaccuracies. Integrating AI into clinical practice therefore requires rigorous validation and robust regulatory frameworks to ensure responsible use.
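The accuracy figure above follows directly from the counts reported in the abstract (77 correct diagnoses out of 98 case reports). As a minimal illustrative sketch, the snippet below reproduces that arithmetic and adds a 95% Wilson score confidence interval; the interval is not reported in the abstract and is computed here only from the stated counts, purely for illustration.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# Counts reported in the abstract: 77 correct MS diagnoses out of 98 case reports.
correct, total = 77, 98
accuracy = correct / total
low, high = wilson_interval(correct, total)
print(f"Accuracy: {accuracy:.1%}")                 # 78.6%
print(f"95% CI (Wilson): {low:.1%} - {high:.1%}")  # roughly 69% - 86%
```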

Authors

  • Anika Joseph
    Health Sciences Program, University of Ottawa, 75 Laurier Ave E, Ottawa, ON K1N 6N5, Canada.
  • Kevin Joseph
    Biomedical Science Program, University of Ottawa, 75 Laurier Ave E, Ottawa, ON K1N 6N5, Canada.
  • Angelyn Joseph
    Merivale High School, 1755 Merivale Rd, Nepean, ON K2G 1E2, Canada.
