Model tuning or prompt tuning? A study of large language models for clinical concept and relation extraction.

Journal: Journal of Biomedical Informatics
Published Date:

Abstract

OBJECTIVE: To develop a soft prompt-based learning architecture for large language models (LLMs), examine prompt tuning with frozen and unfrozen LLMs, and assess their abilities in transfer learning and few-shot learning.

Authors

  • Cheng Peng
    School of Electrical and Mechanical Engineering, Hefei Technology College, Hefei, China.
  • Xi Yang
    Department of Health Outcomes and Biomedical Informatics.
  • Kaleb E Smith
    NVIDIA, Santa Clara, CA, USA.
  • Zehao Yu
    Department of Health Outcomes and Biomedical Informatics.
  • Aokun Chen
    Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, USA; Cancer Informatics Shared Resource, University of Florida Health Cancer Center, Gainesville, FL, USA.
  • Jiang Bian
Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, USA.
  • Yonghui Wu
    Department of Health Outcomes and Biomedical Informatics.