Analyzing Transfer Learning of Vision Transformers for Interpreting Chest Radiography.

Journal: Journal of Digital Imaging

Abstract

The limited availability of medical imaging datasets is a critical obstacle when using "data-hungry" deep learning to gain performance improvements. To address this issue, transfer learning has become a de facto standard, where a convolutional neural network (CNN) pre-trained on natural images (e.g., ImageNet) is fine-tuned on medical images. Meanwhile, pre-trained transformers, which are self-attention-based models, have become the de facto standard in natural language processing (NLP) and the state of the art in image classification owing to their powerful transfer learning abilities. Inspired by this success in NLP and image classification, large-scale transformers such as the vision transformer (ViT) have been trained on natural images. Building on these developments, this research explores the efficacy of transformers pre-trained on natural images for medical imaging. Specifically, we analyze a pre-trained vision transformer on the CheXpert and pediatric pneumonia datasets, using standard CNN models, including VGGNet and ResNet, as baselines. By examining the learned representations and results, we find that transfer learning from the pre-trained vision transformer yields improved results compared to pre-trained CNNs, demonstrating the greater transfer ability of transformers in medical imaging.
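The following is a minimal sketch of the transfer-learning setup the abstract describes: an ImageNet-pre-trained vision transformer fine-tuned on chest radiographs, alongside a pre-trained ResNet baseline. The dataset layout, binary class setup (pneumonia vs. normal, as in the pediatric dataset), and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

NUM_CLASSES = 2  # assumption: binary pneumonia vs. normal classification

# Pre-trained ViT-B/16; swap the classification head for the medical task.
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
vit.heads.head = nn.Linear(vit.heads.head.in_features, NUM_CLASSES)

# CNN baseline: pre-trained ResNet-50 with a new final layer.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)

# Chest X-rays are grayscale; replicate to 3 channels to match ImageNet input.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical ImageFolder layout: data/train/<class_name>/*.png
train_ds = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

def finetune(model, loader, epochs=3, lr=1e-4):
    """Standard fine-tuning loop: all weights updated at a small learning rate."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).train()
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

vit = finetune(vit, loader)
resnet = finetune(resnet, loader)
```

Note that CheXpert is a multi-label task (14 observations), so a faithful reproduction there would use a multi-label head with a sigmoid/BCE loss rather than the binary cross-entropy setup sketched above.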

Authors

  • Mohammad Usman
    Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan.
  • Tehseen Zia
Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan.
  • Ali Tariq
    Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan.