Exploring the effectiveness of instruction tuning in biomedical language processing.

Journal: Artificial Intelligence in Medicine
Published Date:

Abstract

Large Language Models (LLMs), particularly those similar to ChatGPT, have significantly influenced the field of Natural Language Processing (NLP). While these models excel in general language tasks, their performance in domain-specific downstream tasks such as biomedical and clinical Named Entity Recognition (NER), Relation Extraction (RE), and Medical Natural Language Inference (NLI) is still evolving. In this context, our study investigates the potential of instruction tuning for biomedical language processing, applying this technique to two general LLMs of substantial scale. We present a comprehensive, instruction-based model trained on a dataset of approximately 200,000 instruction-focused samples. This dataset represents a carefully curated compilation of existing data, meticulously adapted and reformatted to align with the specific requirements of our instruction-based tasks. This initiative represents an important step in utilising such models to achieve results on par with specialised encoder-only models like BioBERT and BioClinicalBERT on various classical biomedical NLP tasks. Our work includes an analysis of the dataset's composition and its impact on model performance, providing insights into the intricacies of instruction tuning. By sharing our code, models, and the distinctively assembled instruction-based dataset, we seek to encourage ongoing research and development in this area.
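To illustrate the kind of reformatting the abstract describes, the sketch below converts a BIO-tagged NER example into an (instruction, input, output) record suitable for instruction tuning. The function name, prompt template, and entity label are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: turning a BIO-tagged NER sample into an
# instruction-style record. The prompt wording and record schema here
# are assumptions for illustration, not the paper's actual format.

def bio_to_instruction(tokens, tags, entity_label="Disease"):
    """Collect contiguous B-/I-<entity_label> spans and emit one
    instruction-tuning record as a dict."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == f"B-{entity_label}":
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == f"I-{entity_label}" and current:
            current.append(token)
        else:
            if current:
                spans.append(" ".join(current))
                current = []
    if current:
        spans.append(" ".join(current))
    return {
        "instruction": f"List all {entity_label} entities mentioned in the text.",
        "input": " ".join(tokens),
        "output": ", ".join(spans) if spans else "None",
    }

# Example: a clinical sentence with one Disease entity.
record = bio_to_instruction(
    ["Patients", "with", "type", "2", "diabetes", "received", "metformin", "."],
    ["O", "O", "B-Disease", "I-Disease", "I-Disease", "O", "O", "O"],
)
# record["output"] is "type 2 diabetes"
```

Applying a transform like this uniformly across existing NER, RE, and NLI corpora is one plausible way to assemble a large instruction-focused training set from heterogeneous source datasets.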

Authors

  • Omid Rohanian
    Department of Engineering Science, University of Oxford, Oxford, UK.
  • Mohammadmahdi Nouriborji
    NLPie Research, Oxford, UK.
  • Samaneh Kouchaki
    Surrey Institute for People-Centred Artificial Intelligence, University of Surrey, Guildford GU2 7XH, Surrey, UK.
  • Farhad Nooralahzadeh
    University of Zürich and University Hospital of Zürich, Zürich, Switzerland.
  • Lei Clifton
    Nuffield Department of Population Health, University of Oxford, Oxford, England.
  • David A Clifton