Comparison of Diagnostic Performance Between Large Language Models and Veterinary Evaluators in Feline Ocular Diseases Based on Clinical Summaries and Anterior Segment Photographs.
Journal:
Veterinary Ophthalmology
Published Date:
Jul 26, 2025
Abstract
The objective was to evaluate the diagnostic performance of ChatGPT-4.5 and ChatGPT-4o in comparison with experienced and novice veterinary ophthalmologists in diagnosing feline ocular disease. Sixty standardized feline ophthalmology cases, each involving an isolated ocular condition without concurrent systemic disease, were selected from institutional and private archives and presented in a structured format. Each case included a brief clinical summary and an anterior segment image. Two experienced ophthalmologists, two novices, and two artificial intelligence (AI) models (ChatGPT-4.5 and ChatGPT-4o) independently evaluated the cases. Human evaluators were allotted a maximum of 3 min per case. Diagnostic accuracy, interobserver agreement (%), Cohen's kappa coefficients, and Fisher's exact tests were used for comparative analysis. The highest accuracy was achieved by Experienced 1 (96.7%), followed by ChatGPT-4.5 (90.0%), ChatGPT-4o and Experienced 2 (83.3% each), Novice 1 (66.7%), and Novice 2 (56.7%). ChatGPT-4.5 showed strong agreement with ChatGPT-4o (93.3%) and achieved the highest kappa score (κ = 0.47). No statistically significant differences were observed between ChatGPT-4.5 and the experienced ophthalmologists. The AI models significantly outperformed the novice evaluators in both accuracy and agreement. ChatGPT-4.5 demonstrated diagnostic performance closely aligned with that of experienced veterinary ophthalmologists in diagnosing feline ocular disease. These findings support the potential of ChatGPT to assist in clinical decision-making, especially in settings with limited specialist availability.
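The abstract names percent agreement, Cohen's kappa, and Fisher's exact test as the comparative statistics. The Python snippet below is a minimal sketch of how such measures could be computed for two evaluators rated on the same cases, using scipy and scikit-learn; the per-case verdicts, evaluator labels, and library choices are illustrative assumptions and do not reproduce the study's actual analysis.

# Illustrative sketch only (not the study's code): percent agreement,
# Cohen's kappa, and Fisher's exact test for two evaluators on the same cases.
from scipy.stats import fisher_exact
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-case verdicts (1 = correct diagnosis, 0 = incorrect)
gpt45   = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]
expert1 = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1]

# Interobserver agreement (%): share of cases with the same verdict
agreement = sum(a == b for a, b in zip(gpt45, expert1)) / len(gpt45) * 100

# Cohen's kappa: agreement corrected for chance
kappa = cohen_kappa_score(gpt45, expert1)

# Fisher's exact test on a 2x2 table of correct/incorrect counts per evaluator
table = [
    [sum(gpt45),   len(gpt45) - sum(gpt45)],      # evaluator A: correct, incorrect
    [sum(expert1), len(expert1) - sum(expert1)],  # evaluator B: correct, incorrect
]
odds_ratio, p_value = fisher_exact(table)

print(f"agreement = {agreement:.1f}%, kappa = {kappa:.2f}, p = {p_value:.3f}")

Because Cohen's kappa discounts the agreement expected by chance, a high raw agreement (such as the reported 93.3%) can coexist with a moderate kappa (κ = 0.47) when most verdicts fall in a single category.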