Lazy Resampling: Fast and information preserving preprocessing for deep learning.

Journal: Computer Methods and Programs in Biomedicine

Abstract

BACKGROUND AND OBJECTIVE: Preprocessing of data is a vital step for almost all deep learning workflows. In computer vision, manipulation of data intensity and spatial properties can improve network stability and can provide an important source of generalisation for deep neural networks. Models are frequently trained with preprocessing pipelines composed of many stages, but these pipelines come with a drawback: each stage that resamples the data costs time, degrades image quality, and adds bias to the output. Long pipelines can also be complex to design, especially in medical imaging, where cropping data early can cause significant artifacts.
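The drawback described in the abstract, one interpolation pass per pipeline stage, can be sketched with a hypothetical two-stage example. The snippet below is illustrative only (it uses `scipy.ndimage`, not the paper's implementation): the "eager" path resamples after every stage, while the "lazy" path composes the affine matrices and resamples once. Making the second stage the inverse of the first shows the information loss starkly, since the composed transform is the identity.

```python
import numpy as np
from scipy.ndimage import affine_transform

def rotation(theta):
    """2x2 rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(0)
image = rng.random((64, 64))

a = rotation(np.pi / 7)  # hypothetical first pipeline stage
b = a.T                  # hypothetical second stage: the inverse rotation

# Eager: resample after every stage -- two interpolation passes, and data
# rotated out of bounds by the first stage is lost for good.
eager = affine_transform(affine_transform(image, a), b)

# Lazy: accumulate the transforms, then resample once.  affine_transform maps
# output to input coordinates, so the composed matrix is a @ b (here identity).
lazy = affine_transform(image, a @ b)

# The lazy result matches the original almost exactly; the eager result has
# accumulated interpolation error and lost content at the borders.
```

The same principle extends to longer pipelines: however many spatial stages are queued, only a single resampling operation, and hence a single round of interpolation error, is ever applied to the data.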

Authors

  • Benjamin Murray
    School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
  • Richard Brown
    School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
  • Pengcheng Ma
    School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, Shandong 250353, China.
  • Eric Kerfoot
    School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
  • Daguang Xu
    NVIDIA, Santa Clara, CA, USA.
  • Andrew Feng
    NVIDIA, Santa Clara, CA, USA.
  • Jorge Cardoso
    School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
  • Sébastien Ourselin
    Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK.
  • Marc Modat
    Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK.