Adversarial learning for mono- or multi-modal registration.

Journal: Medical Image Analysis

Abstract

This paper introduces an unsupervised adversarial similarity network for image registration. Unlike existing deep learning registration methods, our approach can train a deformable registration network without the need for ground-truth deformations or hand-crafted similarity metrics. We connect a registration network and a discrimination network with a deformable transformation layer. The registration network is trained with feedback from the discrimination network, which is designed to judge whether a pair of registered images is sufficiently similar. Using adversarial training, the registration network learns to predict deformations that are accurate enough to fool the discrimination network. The proposed method is thus a general registration framework, which can be applied to both mono-modal and multi-modal image registration. Experiments on four brain MRI datasets and a multi-modal pelvic image dataset indicate that our method yields promising registration performance in accuracy, efficiency, and generalizability compared with state-of-the-art registration methods, including those based on deep learning.
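The abstract's core idea (a registration network and a discrimination network joined by a transformation layer and trained adversarially) can be sketched on a toy 1-D problem. Everything below is an illustrative assumption, not the authors' architecture: the "registration network" is a single global shift `d`, the "deformable transformation layer" is a shift of the moving signal, the "discriminator" is a logistic score on pair dissimilarity, and finite differences stand in for backpropagation.

```python
import math

# Toy setting: the fixed image is the moving image shifted by an unknown T_TRUE.
T_TRUE = 0.2                      # ground-truth shift (unknown to the model)
XS = [i / 50 for i in range(51)]  # sample grid on [0, 1]

def moving(x):                    # smooth 1-D "moving image"
    return math.exp(-(x - 0.3) ** 2 / 0.02)

FIXED = [moving(x - T_TRUE) for x in XS]

def warp(d):
    # "deformable transformation layer": here just a global shift by d
    return [moving(x - d) for x in XS]

def mse(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) / len(a)

def disc(w, pair_mse):
    # "discriminator": probability that a pair is well aligned;
    # a logistic score that decreases with the pair's dissimilarity
    return 1.0 / (1.0 + math.exp(w * pair_mse - 1.0))

def d_loss(w, d):
    # discriminator loss: score aligned pairs high, registered pairs low
    pos = disc(w, 0.0)                      # a perfectly aligned pair
    neg = disc(w, mse(warp(d), FIXED))      # the pair the registrar produces
    return -math.log(pos) - math.log(1.0 - neg)

def r_loss(w, d):
    # registration loss: fool the discriminator into calling the pair aligned
    return -math.log(disc(w, mse(warp(d), FIXED)))

def fd_grad(f, v, eps=1e-4):
    # finite-difference gradient (stands in for backpropagation)
    return (f(v + eps) - f(v - eps)) / (2.0 * eps)

w, d = 1.0, 0.0
for _ in range(300):              # alternating adversarial updates
    w -= 0.5 * fd_grad(lambda wv: d_loss(wv, d), w)
    d -= 0.05 * fd_grad(lambda dv: r_loss(w, dv), d)

print(round(d, 3))                # recovered shift, close to T_TRUE
```

The registration parameter never sees an explicit similarity metric: it only receives gradients through the discriminator's score, which is what makes the scheme metric-free and, in the paper's full version, applicable to multi-modal pairs where no simple intensity metric exists.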

Authors

  • Jingfan Fan
    Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
  • Xiaohuan Cao
    School of Automation, Northwestern Polytechnical University, Xi'an, China.
  • Qian Wang
    Department of Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China.
  • Pew-Thian Yap
    Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
  • Dinggang Shen
    School of Biomedical Engineering, ShanghaiTech University, Shanghai, China.