Deformation-Compensated Learning for Image Reconstruction Without Ground Truth

Journal: IEEE Transactions on Medical Imaging

Abstract

Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to ground-truth images. However, existing N2N-based methods are not suitable for learning from the measurements of an object undergoing nonrigid deformation. This paper addresses this issue by proposing deformation-compensated learning (DeCoLearn), a method that trains deep reconstruction networks while compensating for object deformations. A key component of DeCoLearn is a deep registration module, which is jointly trained with the deep reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality.
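The core idea described above (reconstruct one noisy view, deform it with a jointly trained registration network, and match it against another noisy view of the same object) can be sketched in a short training loop. The PyTorch code below is a minimal illustrative sketch, not the paper's implementation: the tiny network architectures, the image-domain mean-squared loss, the displacement regularizer, and all names (ReconNet, RegNet, warp, training_step) are assumptions for illustration, and the actual method operates on MRI measurements through a forward model rather than directly on images.

```python
# Minimal sketch (assumed, not the paper's code) of jointly training a
# reconstruction network and a registration network without ground truth.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReconNet(nn.Module):
    """Placeholder reconstruction network (stand-in for the paper's CNN)."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class RegNet(nn.Module):
    """Placeholder registration network: predicts a dense 2-D displacement field."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # (dx, dy) displacement per pixel
        )
    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(img, flow):
    """Warp img (B, C, H, W) by displacement field flow (B, 2, H, W) via grid_sample."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=img.device),
        torch.linspace(-1, 1, w, device=img.device),
        indexing="ij",
    )
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # convert pixel displacements to the normalized [-1, 1] grid convention (approx.)
    disp = torch.stack([flow[:, 0] / (w / 2), flow[:, 1] / (h / 2)], dim=-1)
    return F.grid_sample(img, base + disp, align_corners=True)

recon, reg = ReconNet(), RegNet()
opt = torch.optim.Adam(list(recon.parameters()) + list(reg.parameters()), lr=1e-4)

def training_step(noisy_a, noisy_b):
    """One joint step: reconstruct view A, deform it toward view B,
    and match the deformed reconstruction to the noisy view B (N2N-style)."""
    x_a = recon(noisy_a)                    # reconstruction of view A
    flow = reg(x_a, noisy_b)                # deformation aligning A to B
    x_a_warped = warp(x_a, flow)            # deformation-compensated prediction
    loss = F.mse_loss(x_a_warped, noisy_b)  # no ground-truth target needed
    loss = loss + 0.1 * flow.abs().mean()   # illustrative displacement regularizer
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: random tensors stand in for two noisy views of a deforming object.
a, b = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
print(training_step(a, b))
```

Because both networks are optimized through the same loss, the registration module learns to align the views at the same time as the reconstruction network learns to remove measurement noise, which is the joint, ground-truth-free training the abstract describes.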

Authors

  • Weijie Gan
Department of Computer Science & Engineering, Washington University in St. Louis, St. Louis, MO, USA.
  • Yu Sun
Washington University in St. Louis, St. Louis, MO, USA.
  • Cihat Eldeniz
Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA.
  • Jiaming Liu
Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA.
  • Hongyu An
Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA.
  • Ulugbek S Kamilov
    Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO, USA.