Large language models leverage external knowledge to extend clinical insight beyond language boundaries.

Journal: Journal of the American Medical Informatics Association (JAMIA)

Abstract

OBJECTIVES: Large Language Models (LLMs) such as ChatGPT and Med-PaLM have excelled in various medical question-answering tasks. However, these English-centric models encounter challenges in non-English clinical settings, primarily due to limited clinical knowledge in the respective languages, a consequence of imbalanced training corpora. We systematically evaluate LLMs in the Chinese medical context and develop a novel in-context learning framework to enhance their performance.
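
The abstract does not describe the framework's mechanics. As a rough illustration only, the sketch below shows the general retrieval-augmented in-context learning pattern that the objective alludes to: external clinical knowledge is retrieved and prepended to the question before querying an LLM. The function names (`retrieve_knowledge`, `build_prompt`), the toy overlap-based retrieval, and the example snippets are hypothetical placeholders, not the authors' method.

```python
# Minimal, hypothetical sketch of in-context learning with external clinical
# knowledge. Not the authors' framework; retrieval here is a toy heuristic.
from typing import List


def retrieve_knowledge(question: str, knowledge_base: List[str], top_k: int = 2) -> List[str]:
    # Toy retrieval: rank knowledge snippets by character-level overlap with the question.
    scored = sorted(
        knowledge_base,
        key=lambda snippet: len(set(question) & set(snippet)),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(question: str, snippets: List[str]) -> str:
    # Prepend the retrieved snippets so the model can ground its answer in them.
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Reference clinical knowledge:\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer concisely, citing the references where relevant."
    )


if __name__ == "__main__":
    kb = [
        "Metformin is a first-line therapy for type 2 diabetes.",
        "ACE inhibitors can cause a dry cough in some patients.",
    ]
    question = "What is the first-line drug therapy for type 2 diabetes?"
    prompt = build_prompt(question, retrieve_knowledge(question, kb))
    print(prompt)  # In practice, this prompt would be sent to an LLM API.
```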

Authors

  • Jiageng Wu
    School of Public Health, Zhejiang University School of Medicine, Zhejiang, China.
  • Xian Wu
Beijing University of Posts and Telecommunications, Beijing, 100876, China.
  • Zhaopeng Qiu
    Jarvis Research Center, Tencent YouTu Lab, Beijing, 100101, China.
  • Minghui Li
MOE Key Laboratory of Geriatric Diseases and Immunology, School of Biology and Basic Medical Sciences, Suzhou Medical College of Soochow University, Suzhou, Jiangsu, 215123, China.
  • Shixu Lin
    School of Public Health, Zhejiang University School of Medicine, Hangzhou, Zhejiang, 310058, China.
  • Yingying Zhang
Laboratory of Pharmacology, Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, 100700, China.
  • Yefeng Zheng
  • Changzheng Yuan
    School of Public Health, Zhejiang University School of Medicine, Hangzhou, 310058, China.
  • Jie Yang
    Key Laboratory of Development and Maternal and Child Diseases of Sichuan Province, Department of Pediatrics, Sichuan University, Chengdu, China.