AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Licensure, Medical

Showing 1 to 10 of 18 articles

Performance of single-agent and multi-agent language models in Spanish language medical competency exams.

BMC medical education
BACKGROUND: Large language models (LLMs) like GPT-4o have shown promise in advancing medical decision-making and education. However, their performance in Spanish-language medical contexts remains underexplored. This study evaluates the effectiveness ...

Evaluating the performance of GPT-3.5, GPT-4, and GPT-4o in the Chinese National Medical Licensing Examination.

Scientific reports
This study aims to compare and evaluate the performance of GPT-3.5, GPT-4, and GPT-4o in the 2020 and 2021 Chinese National Medical Licensing Examination (NMLE), exploring their potential value in medical education and clinical applications. Six hund...

Assessing ChatGPT 4.0's Capabilities in the United Kingdom Medical Licensing Examination (UKMLA): A Robust Categorical Analysis.

Scientific reports
Advances in the various applications of artificial intelligence will have important implications for medical training and practice. The advances in ChatGPT-4 alongside the introduction of the medical licensing assessment (MLA) provide an opportunity ...

Benchmarking Vision Capabilities of Large Language Models in Surgical Examination Questions.

Journal of surgical education
OBJECTIVE: Recent studies investigated the potential of large language models (LLMs) for clinical decision making and answering exam questions based on text input. Recent developments of LLMs have extended these models with vision capabilities. These...

Performance of ChatGPT-4 on Taiwanese Traditional Chinese Medicine Licensing Examinations: Cross-Sectional Study.

JMIR medical education
BACKGROUND: The integration of artificial intelligence (AI), notably ChatGPT, into medical education, has shown promising results in various medical fields. Nevertheless, its efficacy in traditional Chinese medicine (TCM) examinations remains underst...

Performance of ChatGPT-4o on the Japanese Medical Licensing Examination: Evaluation of Accuracy in Text-Only and Image-Based Questions.

JMIR medical education
This study evaluated the performance of ChatGPT with GPT-4 Omni (GPT-4o) on the 118th Japanese Medical Licensing Examination. The study focused on both text-only and image-based questions. The model demonstrated a high level of accuracy overall, with...

Evaluating the effectiveness of advanced large language models in medical knowledge: A comparative study using Japanese national medical examination.

International journal of medical informatics
This study aims to evaluate the accuracy of medical knowledge in the most advanced LLMs (GPT-4o, GPT-4, Gemini 1.5 Pro, and Claude 3 Opus) as of 2024. It is the first to evaluate these LLMs using a non-English m...

Unveiling GPT-4V's hidden challenges behind high accuracy on USMLE questions: Observational Study.

Journal of medical Internet research
BACKGROUND: Recent advancements in artificial intelligence, such as GPT-3.5 Turbo (OpenAI) and GPT-4, have demonstrated significant potential by achieving good scores on text-only United States Medical Licensing Examination (USMLE) exams and effectiv...

While GPT-3.5 is unable to pass the Physician Licensing Exam in Taiwan, GPT-4 successfully meets the criteria.

Journal of the Chinese Medical Association : JCMA
BACKGROUND: This study investigates the performance of ChatGPT-3.5 and ChatGPT-4 in answering medical questions from Taiwan's Physician Licensing Exam, ranging from basic medical knowledge to specialized clinical topics. It aims to understand these a...

Semantic Clinical Artificial Intelligence vs Native Large Language Model Performance on the USMLE.

JAMA network open
IMPORTANCE: Large language models (LLMs) are being implemented in health care. Enhanced accuracy and methods to maintain accuracy over time are needed to maximize LLM benefits.