Fine-Tuning an Existing Large Language Model with Knowledge from the Medical Expert System Hepaxpert.

Journal: Studies in Health Technology and Informatics

Abstract

The analysis and individual interpretation of hepatitis serology test results are complex tasks in laboratory medicine, requiring either experienced physicians or specialized expert systems. This study explores fine-tuning a large language model (LLM) for hepatitis serology interpretation on a single graphics processing unit (GPU). A custom dataset derived from the Hepaxpert expert system was used to train the LLM; fine-tuning was performed on an NVIDIA RTX 6000 Ada GPU via torchtune. When its output was compared to that of Hepaxpert using the METEOR metric, the fine-tuned LLM showed significant performance improvements over the base model. The findings highlight the potential of LLMs for enhancing medical expert systems as well as the importance of domain-specific fine-tuning.
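As a concrete illustration of the evaluation step, the sketch below scores a model-generated interpretation against a Hepaxpert-style reference text with METEOR. It is a minimal sketch assuming NLTK's METEOR implementation; the example strings are illustrative, and the abstract does not specify which METEOR implementation the study used.

    # Minimal sketch of METEOR-based scoring, assuming NLTK's implementation;
    # the reference/hypothesis strings are illustrative, not taken from the study.
    import nltk
    from nltk.translate.meteor_score import meteor_score

    nltk.download("wordnet", quiet=True)  # METEOR matches synonyms via WordNet

    reference = ("Serological pattern consistent with immunity to "
                 "hepatitis B after vaccination.")
    hypothesis = ("The results indicate hepatitis B immunity "
                  "acquired through vaccination.")

    # Recent NLTK versions expect pre-tokenized input
    score = meteor_score([reference.split()], hypothesis.split())
    print(f"METEOR: {score:.3f}")

Because METEOR rewards stem and synonym matches in addition to exact n-gram overlap, it is a reasonable choice for comparing free-text serology interpretations whose wording may differ from the expert system's phrasing.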

Authors

  • Jakob Kainz
    University of Applied Sciences Vienna, Höchstädtplatz 6, 1200 Vienna, Austria.
  • Philipp Seisl
    Medexter Healthcare, Borschkegasse 7/5, 1090 Vienna, Austria.
  • Moritz Grob
    Medical University of Vienna, Center for Medical Data Science, Institute of Artificial Intelligence, Spitalgasse 23, 1090 Vienna, Austria.
  • Leonhard Hauptfeld
    Medexter Healthcare, Borschkegasse 7/5, 1090 Vienna, Austria.
  • Jonas Wahringer
    Medexter Healthcare, Borschkegasse 7/5, 1090 Vienna, Austria.
  • Andrea Rappelsberger
    Section for Medical Expert and Knowledge-Based Systems, CeMSIIS, Medical University of Vienna, Vienna, Austria.
  • Klaus-Peter Adlassnig
    Section for Medical Expert and Knowledge-Based Systems, CeMSIIS, Medical University of Vienna, Vienna, Austria.