What Kind of Transformer Models to Use for the ICD-10 Codes Classification Task.

Journal: Studies in Health Technology and Informatics
PMID:

Abstract

Coding according to the International Classification of Diseases (ICD)-10 and its Clinical Modification (CM) is inherently complex and expensive. Natural Language Processing (NLP) assists by simplifying the analysis of unstructured data from electronic health records, thereby facilitating diagnosis coding. This study investigates the suitability of transformer models for ICD-10 classification, considering both encoder and encoder-decoder architectures. The analysis is performed on clinical discharge summaries from the Medical Information Mart for Intensive Care (MIMIC)-IV dataset, which contains an extensive collection of electronic health records. Pre-trained models such as BioBERT, ClinicalBERT, ClinicalLongformer, and ClinicalBigBird are adapted for the coding task, incorporating specific preprocessing techniques to enhance performance. The findings indicate that increasing context length improves accuracy, and that the difference in accuracy between encoder and encoder-decoder models is negligible.
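
To make the adaptation step concrete, below is a minimal sketch (not the authors' implementation) of configuring a pre-trained clinical transformer for multi-label ICD-10 coding with the Hugging Face transformers library. The checkpoint name, label count, truncation length, and decision threshold are illustrative assumptions; the paper does not publish its exact setup.

    # Minimal sketch: multi-label ICD-10 classification head on a pre-trained
    # clinical long-context encoder. Assumptions: the public
    # "yikuan8/Clinical-Longformer" checkpoint, a placeholder label count,
    # and a 0.5 decision threshold.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    NUM_ICD10_CODES = 50  # placeholder: size of the target ICD-10 code set

    tokenizer = AutoTokenizer.from_pretrained("yikuan8/Clinical-Longformer")
    model = AutoModelForSequenceClassification.from_pretrained(
        "yikuan8/Clinical-Longformer",
        num_labels=NUM_ICD10_CODES,
        problem_type="multi_label_classification",  # per-code sigmoid + BCE loss
    )

    summary = "Discharge summary text ..."  # placeholder clinical note
    inputs = tokenizer(
        summary,
        truncation=True,
        max_length=4096,  # long-context models accept far more than BERT's 512 tokens
        return_tensors="pt",
    )

    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits)        # independent probability per ICD-10 code
    predicted = (probs > 0.5).nonzero()  # indices of codes above the threshold

The longer max_length is the mechanism behind the paper's context-length finding: Longformer- and BigBird-style encoders can consume a full discharge summary, whereas BERT-based models must truncate it.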

Authors

  • Mariem Mansour
    Bern University of Applied Sciences, Switzerland.
  • Fatma Yilmaz
    Bern University of Applied Sciences, Switzerland.
  • Marko Miletic
    Bern University of Applied Sciences, Switzerland.
  • Murat Sariyar
    Bern University of Applied Sciences, Department of Medical Informatics, Switzerland.