Fine-Tuning an Existing Large Language Model with Knowledge from the Medical Expert System Hepaxpert.
Journal:
Studies in Health Technology and Informatics
Published Date:
May 15, 2025
Abstract
The analysis and individual interpretation of hepatitis serology test results is a complex task in laboratory medicine, requiring either experienced physicians or specialized expert systems. This study explores fine-tuning a large language model (LLM) for hepatitis serology interpretation on a single graphics processing unit (GPU). A custom dataset derived from the Hepaxpert expert system was used to train the LLM; fine-tuning was performed on an Nvidia RTX 6000 Ada GPU via torchtune. When evaluated against Hepaxpert reference interpretations using the METEOR metric, the fine-tuned LLM showed significant performance improvements over the base model. The findings highlight the potential of LLMs to enhance medical expert systems and the importance of domain-specific fine-tuning.
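To illustrate the kind of evaluation the abstract describes, the sketch below scores a model output against a reference interpretation with a simplified METEOR-style measure: the harmonic mean of unigram precision and recall, with recall weighted 9:1 as in METEOR. This is an illustrative simplification, not the metric used in the study; full METEOR implementations (e.g. in NLTK) additionally apply stemming, synonym matching, and a fragmentation penalty.

```python
def simple_meteor(reference: str, hypothesis: str) -> float:
    """Simplified METEOR-style score on whitespace-tokenized unigrams.

    Omits stemming, synonymy, and the fragmentation penalty of full METEOR.
    """
    ref_tokens = reference.lower().split()
    hyp_tokens = hypothesis.lower().split()
    if not ref_tokens or not hyp_tokens:
        return 0.0

    # Count unigram matches, consuming each reference token at most once.
    remaining = list(ref_tokens)
    matches = 0
    for tok in hyp_tokens:
        if tok in remaining:
            remaining.remove(tok)
            matches += 1
    if matches == 0:
        return 0.0

    precision = matches / len(hyp_tokens)
    recall = matches / len(ref_tokens)
    # METEOR's F-mean weights recall nine times more than precision.
    return 10 * precision * recall / (recall + 9 * precision)
```

In a study like this one, such a score would be computed for each test case, comparing the base and fine-tuned models' outputs against the Hepaxpert reference text, then aggregated across the test set.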