The utility of a generative artificial intelligence chatbot (ChatGPT) in generating teaching and learning material for anesthesiology residents.

Journal: Frontiers in Artificial Intelligence

Abstract

The popularization of large language model chatbots such as ChatGPT has led to growing interest in their utility across various biomedical fields. Chatbots have been shown to provide reasonably accurate responses to medical exam-style questions. On the other hand, chatbots have known limitations that may hinder their utility in medical education. We conducted a pragmatically designed study to evaluate the accuracy and completeness of ChatGPT-generated responses to various styles of prompts based on entry-level anesthesiology topics. Ninety-five unique prompts were constructed using topics from the Anesthesia Knowledge Test 1 (AKT-1), a standardized exam undertaken by US anesthesiology residents after 1 month of specialty training. A combination of focused and open-ended prompts was used to evaluate the chatbot's ability to present and organize information. We also included prompts for journal references and lecture outlines, as well as biased (medically inaccurate) prompts. The responses were independently scored on a 3-point Likert scale by two board-certified anesthesiologists with extensive experience in medical education. Fifty-two (55%) responses were rated as completely accurate by both evaluators. For prompts eliciting longer responses, most of the responses were also deemed complete. Notably, the chatbot frequently generated inaccurate responses when asked for specific literature references and when the input prompt contained deliberate errors (biased prompts). Another recurring observation was the conflation of adjacent concepts (e.g., a specific characteristic was attributed to the wrong drug within the same pharmacological class). Some of the inaccuracies could potentially result in significant harm if applied to clinical situations. While chatbots such as ChatGPT can generate medically accurate responses in most cases, their reliability is not yet suited to medical and clinical education. Content generated by ChatGPT and other chatbots will require validation prior to use.

Authors

  • Zhaosheng Jin
    Department of Anesthesiology, Stony Brook University Hospital, Stony Brook, NY, United States.
  • Ramon Abola
    Department of Anesthesiology, Stony Brook University Hospital, Stony Brook, NY, United States.
  • Vincent Bargnes
    Department of Anesthesiology, Stony Brook University Hospital, Stony Brook, NY, United States.
  • Alexandra Tsivitis
    Department of Anesthesiology, Stony Brook University Hospital, Stony Brook, NY, United States.
  • Sadiq Rahman
    Department of Anesthesiology, Stony Brook University Hospital, Stony Brook, NY, United States.
  • Jonathon Schwartz
    Department of Anesthesiology, Stony Brook University Hospital, Stony Brook, NY, United States.
  • Sergio D Bergese
    Department of Anesthesiology, Stony Brook University Hospital, Stony Brook, NY, United States.
  • Joy E Schabel
    Department of Anesthesiology, Stony Brook University Hospital, Stony Brook, NY, United States.
