Comparative Performance Analysis of AI Engines in Answering American Board of Surgery In-Training Examination Questions: A Multi-Subspecialty Evaluation.

Journal: Surgical Innovation
Published Date:

Abstract

Background: The rapid advancement of artificial intelligence (AI) has led to its increasing application in medicine, particularly in providing accurate and reliable information for complex medical queries.

Purpose: This study evaluates the performance of four AI engines (Perplexity, ChatGPT, DeepSeek, and Gemini) in answering 100 multiple-choice questions derived from the American Board of Surgery In-Training Examination (ABSITE). The questions covered five surgical subspecialties: colorectal surgery, acute care and trauma surgery (ACS), upper gastrointestinal (GI) surgery, breast and endocrine surgery, and hepatopancreatobiliary (HPB) surgery.

Data collection: The primary objective was to evaluate the AI engines' ability to provide accurate and focused medical knowledge. The study was conducted from January 1, 2025, to March 28, 2025. All engines received identical questions, and their responses were scored as correct or incorrect against the ABSITE answer key. Each question was entered manually into the chatbots to avoid memory retention bias.

Statistical analysis: Statistical analysis was performed in JASP, using univariate and multivariate analyses to compare performance across subspecialties and AI engines.

Results: DeepSeek produced the most accurate responses (74%), followed by ChatGPT (70%), Gemini (69%), and Perplexity (65%). ChatGPT achieved 83.3% accuracy in colorectal surgery, while DeepSeek scored highest in HPB surgery (84.6%) and ACS (67.6%). Perplexity achieved 100% accuracy in breast and endocrine surgery, the highest score recorded in the study. ChatGPT's performance varied significantly across surgical subspecialties (P < .05), especially in acute care and trauma surgery. Logistic regression indicated that Gemini and Perplexity gave the most consistent answers among the AI systems (odds ratio 2.5, P < .01). Overall, the AI engines showed differing combinations of precision and reliability on surgical questions, with DeepSeek remaining the most reliable.

Conclusions: AI models for medical applications require further development, as performance differs markedly across surgical specialties.
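
The univariate and multivariate approach described above can be illustrated with a short Python sketch using pandas and statsmodels. This is a minimal sketch only: the study itself used JASP, and every data value, accuracy figure, and variable name below is a hypothetical placeholder, not the authors' analysis. It shows how per-engine accuracy and a logistic regression on answer correctness (whose exponentiated coefficients are odds ratios) could be computed from long-format results.

    # Minimal sketch with hypothetical data; the study's analysis was done in JASP.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    engines = ["DeepSeek", "ChatGPT", "Gemini", "Perplexity"]
    subspecialties = ["colorectal", "ACS", "upper_GI", "breast_endocrine", "HPB"]

    # Hypothetical long-format results: one row per (engine, question) pair,
    # 100 questions per engine, with a placeholder 70% chance of a correct answer.
    rows = []
    for engine in engines:
        for q in range(100):
            rows.append({
                "engine": engine,
                "subspecialty": subspecialties[q % len(subspecialties)],
                "correct": int(rng.random() < 0.7),  # placeholder, not real data
            })
    df = pd.DataFrame(rows)

    # Univariate view: accuracy per engine, and per engine within each subspecialty.
    print(df.groupby("engine")["correct"].mean())
    print(df.groupby(["engine", "subspecialty"])["correct"].mean().unstack())

    # Multivariate view: logistic regression of correctness on engine and
    # subspecialty; exponentiating the coefficients yields odds ratios.
    model = smf.logit("correct ~ C(engine) + C(subspecialty)", data=df).fit(disp=0)
    print(np.exp(model.params))  # odds ratios
    print(model.pvalues)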

Authors

  • Nawaf AlShahwan
    Trauma and Acute Care Surgery Unit, Department of Surgery, College of Medicine, King Saud University, Riyadh, Saudi Arabia.
  • Ibrahim Majed Fetyani
    Department of Surgery, King Saud University, Riyadh, Saudi Arabia.
  • Mohammed Basem Beyari
    College of Medicine, King Saud University, Riyadh, Saudi Arabia.
  • Saleh Husam Aldeligan
    College of Medicine, King Saud University, Riyadh, Saudi Arabia.
  • Maram Basem Beyari
    College of Medicine, King Saud University, Riyadh, Saudi Arabia.
  • Rayan Saleh Alshehri
    College of Medicine, King Saud University, Riyadh, Saudi Arabia.
  • Ahmed Alburakan
    Trauma and Acute Care Surgery Unit, Department of Surgery, College of Medicine, King Saud University, Riyadh, Saudi Arabia.
  • Hassan Mashbari
    Department of Surgery, King Saud University, Riyadh, Saudi Arabia.
  • Abdulaziz AlKanhal
    Trauma and Acute Care Surgery Unit, Department of Surgery, College of Medicine, King Saud University, Riyadh, Saudi Arabia.
  • Thamer Nouh
    Trauma and Acute Care Surgery Unit, College of Medicine, King Saud University, Riyadh 12271, Saudi Arabia.
