While GPT-3.5 is unable to pass the Physician Licensing Exam in Taiwan, GPT-4 successfully meets the criteria.

Journal: Journal of the Chinese Medical Association (JCMA)
PMID:

Abstract

BACKGROUND: This study investigates the performance of ChatGPT-3.5 and ChatGPT-4 in answering medical questions from Taiwan's Physician Licensing Exam, ranging from basic medical knowledge to specialized clinical topics. It aims to assess these artificial intelligence (AI) models' capabilities in a non-English context, specifically in traditional Chinese.

Authors

  • Tsung-An Chen
    Department of Family Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC.
  • Kuan-Chen Lin
    Department of Family Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC.
  • Ming-Hwai Lin
    Department of Family Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC.
  • Hsiao-Ting Chang
    Department of Family Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC.
  • Yu-Chun Chen
    School of Medicine, Faculty of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan.
  • Tzeng-Ji Chen
    Department of Family Medicine, Taipei Veterans General Hospital Hsinchu Branch, Hsinchu County, Taiwan, ROC.