Lazy Resampling: Fast and information preserving preprocessing for deep learning.
Journal:
Computer Methods and Programs in Biomedicine
Published Date:
Sep 19, 2024
Abstract
BACKGROUND AND OBJECTIVE: Data preprocessing is a vital step in almost all deep learning workflows. In computer vision, manipulating the intensity and spatial properties of data can improve network stability and provide an important source of generalisation for deep neural networks. Models are frequently trained with preprocessing pipelines composed of many stages, but these pipelines come with a drawback: each stage that resamples the data costs time, degrades image quality, and adds bias to the output. Long pipelines can also be complex to design, especially in medical imaging, where cropping data early can cause significant artifacts.
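The problem the abstract describes can be illustrated with a minimal sketch: a pipeline that resamples at every spatial stage interpolates already-interpolated data, compounding blur and border error per stage, whereas composing the stages' transforms and resampling once touches the raw pixels a single time. The sketch below uses scipy.ndimage as a stand-in resampler; the helper inverse_map and the 15 degree / 1.25x parameters are invented for the example, and this is not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def inverse_map(shape, rotate_deg=0.0, scale=1.0):
    """3x3 homogeneous matrix sending output pixel coordinates to input
    pixel coordinates (the convention scipy.ndimage.affine_transform
    uses) for a rotation and isotropic scale about the image centre."""
    c = (np.asarray(shape, dtype=float) - 1.0) / 2.0
    t = np.deg2rad(rotate_deg)
    # Inverse of "scale by s then rotate by t" is rotate by -t, scale by 1/s.
    r = np.array([[np.cos(t),  np.sin(t)],
                  [-np.sin(t), np.cos(t)]]) / scale
    m = np.eye(3)
    m[:2, :2] = r
    m[:2, 2] = c - r @ c  # translation that keeps the centre fixed
    return m

img = np.zeros((128, 128), dtype=np.float32)
img[40:88, 40:88] = 1.0  # toy image: a bright square

rot = inverse_map(img.shape, rotate_deg=15.0)
scl = inverse_map(img.shape, scale=1.25)

# Eager pipeline: every stage interpolates the output of the previous
# stage, so interpolation error and border artifacts compound per stage.
eager = ndimage.affine_transform(
    ndimage.affine_transform(img, rot, order=1), scl, order=1)

# Lazy pipeline: compose the inverse maps first (rotation is applied
# first, so its matrix sits leftmost), then resample the raw data once.
lazy = ndimage.affine_transform(img, rot @ scl, order=1)
```

Under these assumptions, the lazy result is computed with one interpolation pass instead of one per stage, so it is both faster and sharper; the difference grows with the number of spatial stages in the pipeline.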