Improving chest X-ray report generation by leveraging warm starting.

Journal: Artificial Intelligence in Medicine

Abstract

Automatically generating a report from a patient's chest X-rays (CXRs) is a promising solution to reducing clinical workload and improving patient care. However, current CXR report generators, which are predominantly encoder-to-decoder models, lack the diagnostic accuracy to be deployed in a clinical setting. To improve CXR report generation, we investigate warm starting the encoder and decoder with recent open-source computer vision and natural language processing checkpoints, such as the Vision Transformer (ViT) and PubMedBERT. To this end, each checkpoint is evaluated on the MIMIC-CXR and IU X-ray datasets. Our experimental investigation demonstrates that the Convolutional vision Transformer (CvT) ImageNet-21K and the Distilled Generative Pre-trained Transformer 2 (DistilGPT2) checkpoints are best for warm starting the encoder and decoder, respectively. Compared to the state-of-the-art (M2 Transformer Progressive), CvT2DistilGPT2 attained improvements of 8.3% for CE F-1, 1.8% for BLEU-4, 1.6% for ROUGE-L, and 1.0% for METEOR. The reports generated by CvT2DistilGPT2 are more similar to radiologist reports than those of previous approaches, indicating that warm starting improves CXR report generation. Code and checkpoints for CvT2DistilGPT2 are available at https://github.com/aehrc/cvt2distilgpt2.
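
For illustration, the sketch below shows how an encoder-to-decoder report generator can be warm started from public checkpoints using the Hugging Face Transformers library. It is not the authors' CvT2DistilGPT2 implementation (that is provided in the linked repository); it pairs a ViT ImageNet-21K encoder, one of the checkpoints evaluated in the paper, with the DistilGPT2 decoder, and the checkpoint identifiers ("google/vit-base-patch16-224-in21k", "distilgpt2") are assumed Hugging Face Hub names.

# Minimal warm-starting sketch (assumption: Hugging Face Transformers is installed).
from transformers import (
    VisionEncoderDecoderModel,
    ViTImageProcessor,
    GPT2TokenizerFast,
)

# Warm start: encoder weights are loaded from an ImageNet-21K ViT checkpoint and
# decoder weights from DistilGPT2; the cross-attention layers connecting them are
# randomly initialised and must be learned by fine-tuning on CXR report pairs.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "distilgpt2"
)

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = GPT2TokenizerFast.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no dedicated padding token.

# Generation needs the decoder's special token ids set on the model config.
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.eos_token_id

# After fine-tuning on MIMIC-CXR or IU X-ray, report generation would look like:
# pixel_values = processor(images=cxr_image, return_tensors="pt").pixel_values
# report_ids = model.generate(pixel_values, max_length=128, num_beams=4)
# report = tokenizer.decode(report_ids[0], skip_special_tokens=True)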

Authors

  • Aaron Nicolson
    Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Herston, Queensland, 4006, Australia.
  • Jason Dowling
    Australian e-Health Research Centre, CSIRO, Digital Productivity Flagship.
  • Bevan Koopman
    Australian e-Health Research Centre, CSIRO, Brisbane, QLD, Australia; Queensland University of Technology, Brisbane, QLD, Australia.