Learning Domain-Agnostic Visual Representation for Computational Pathology Using Medically-Irrelevant Style Transfer Augmentation.

Journal: IEEE Transactions on Medical Imaging

Abstract

Suboptimal generalization of machine learning models to unseen data is a key challenge that hampers the clinical applicability of such models in medical imaging. Although various methods, such as domain adaptation and domain generalization, have evolved to combat this challenge, learning robust and generalizable representations is core to medical image understanding and remains an open problem. Here, we propose STRAP (Style TRansfer Augmentation for histoPathology), a form of data augmentation based on random style transfer from non-medical style sources such as artistic paintings, for learning domain-agnostic visual representations in computational pathology. Style transfer replaces the low-level texture content of an image with the uninformative style of a randomly selected style source image, while preserving the original high-level semantic content. This improves robustness to domain shift and can be used as a simple yet powerful tool for learning domain-agnostic representations. We demonstrate that STRAP achieves state-of-the-art performance, particularly in the presence of domain shift, on two classification tasks in computational pathology. Our code is available at https://github.com/rikiyay/style-transfer-for-digital-pathology.
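The core operation behind fast style transfer methods of the kind the abstract describes is a per-channel statistic swap in feature space (adaptive instance normalization, AdaIN): the content features are normalized and then re-scaled to carry the mean and standard deviation of the style features, so texture statistics come from the style source while spatial (semantic) structure is preserved. The sketch below illustrates this statistic swap with NumPy only; it is a simplified illustration under assumed array shapes (channels-first feature maps), not the authors' full encoder-decoder pipeline.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization (illustrative sketch).

    Normalizes the content feature map per channel, then re-scales it
    with the style feature map's per-channel mean and std. Arrays are
    assumed channels-first: (C, H, W).
    """
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Strip the content's channel statistics, then impose the style's.
    return s_std * (content - c_mu) / (c_std + eps) + s_mu

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(3, 8, 8))   # stand-in "content" features
style = rng.normal(2.0, 0.5, size=(3, 8, 8))     # stand-in "style" features
stylized = adain(content, style)
# The stylized features now carry the style's channel-wise statistics.
print(np.allclose(stylized.mean(axis=(1, 2)), style.mean(axis=(1, 2)), atol=1e-4))
```

In the full method, this swap is applied to deep encoder features and a decoder maps the result back to image space; used as augmentation with randomly drawn style images (e.g., artistic paintings), it randomizes low-level texture while leaving high-level content intact.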

Authors

  • Rikiya Yamashita
    Artera, Inc., Los Altos, CA.
  • Jin Long
    Center for Artificial Intelligence in Medicine and Imaging, Stanford University, 1701 Page Mill Road, Palo Alto, CA, 94304, USA.
  • Snikitha Banda
  • Jeanne Shen
    Center for Artificial Intelligence in Medicine and Imaging, Stanford University, 1701 Page Mill Road, Palo Alto, CA, 94304, USA. jeannes@stanford.edu.
  • Daniel L Rubin
Department of Biomedical Data Science, Stanford University School of Medicine, Medical School Office Building, Stanford, CA 94305-5479, USA.