CBCRnet: A Contrastive Learning-based Multi-modal Image Registration Via Bidirectional Cross-modal Attention.

Journal: Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

Abstract

In the past few years, convolutional neural networks (CNNs) have been a major focus in medical image registration. However, CNNs have been shown to be limited in their ability to represent modal-independent features and to capture the spatial correspondence between different modalities. We therefore present CBCRnet for effective feature representation and correspondence modeling. 1) We propose a novel contrast-reconstruction-task-guided pretraining method for modal-independent feature learning, in which unaligned image pairs can be used directly for pretraining. 2) We propose a bidirectional cross-modal attention module to capture explicit spatial correspondence.

Clinical Relevance- Multi-modal deformable medical image registration has many applications in diagnostic medical imaging, organ mapping, and surgical navigation [1], such as ablation surgery guided by intraprocedural CT and preoperative MR. Multi-modal deformable image registration is therefore of significant clinical importance.
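The abstract does not detail the internal design of the bidirectional cross-modal attention module, so the sketch below is only an illustrative assumption of how such a block could be realized: each modality's feature tokens attend to the other modality's tokens in both directions, with standard multi-head attention, residual connections, and layer normalization. All layer names, dimensions, and the fixed/moving roles are hypothetical and not taken from the paper.

```python
# A minimal sketch of a bidirectional cross-modal attention block (assumed design,
# not the CBCRnet implementation).
import torch
import torch.nn as nn


class BidirectionalCrossModalAttention(nn.Module):
    """Cross-attend fixed-image and moving-image features in both directions,
    so each modality's features are refined by the other's."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Moving features query the fixed features, and vice versa.
        self.attn_m2f = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_f2m = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_f = nn.LayerNorm(dim)
        self.norm_m = nn.LayerNorm(dim)

    def forward(self, feat_fixed: torch.Tensor, feat_moving: torch.Tensor):
        # feat_fixed, feat_moving: (B, N, C) token sequences, e.g. flattened
        # spatial feature maps from the two modality encoders.
        fixed_ctx, _ = self.attn_f2m(feat_fixed, feat_moving, feat_moving)
        moving_ctx, _ = self.attn_m2f(feat_moving, feat_fixed, feat_fixed)
        # Residual connections keep the original per-modality features.
        feat_fixed = self.norm_f(feat_fixed + fixed_ctx)
        feat_moving = self.norm_m(feat_moving + moving_ctx)
        return feat_fixed, feat_moving


if __name__ == "__main__":
    # Toy usage: two flattened 8x8x8 feature maps with 128 channels each,
    # standing in for intraprocedural CT (fixed) and preoperative MR (moving).
    f = torch.randn(1, 8 * 8 * 8, 128)
    m = torch.randn(1, 8 * 8 * 8, 128)
    block = BidirectionalCrossModalAttention(dim=128, num_heads=4)
    f_out, m_out = block(f, m)
    print(f_out.shape, m_out.shape)  # torch.Size([1, 512, 128]) twice
```

The refined feature pairs produced this way could then feed a registration head that predicts the deformation field; that downstream step is outside the scope of this sketch.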

Authors

  • Yin Pengfei
  • Hu Jisu
  • Qian Xusheng
  • Dai Yakang
    Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 88 Keling Road, Suzhou, 215163, China. daiyk@sibet.ac.cn.
  • Zhou Zhiyong