PEARL: Cascaded Self-Supervised Cross-Fusion Learning for Parallel MRI Acceleration.

Journal: IEEE Journal of Biomedical and Health Informatics
Published Date:

Abstract

Supervised deep learning (SDL) methodology holds promise for accelerated magnetic resonance imaging (AMRI) but is hampered by its reliance on extensive training data. Self-supervised frameworks such as the deep image prior (DIP) eliminate the explicit training procedure but often struggle to remove noise and artifacts under significant degradation. This work introduces PEARL, a novel self-supervised accelerated parallel MRI approach that leverages a multiple-stream joint deep decoder with two cross-fusion schemes to accurately reconstruct one or more target images from compressively sampled k-space. Each stream comprises cascaded cross-fusion sub-block networks (SBNs) that sequentially perform combined upsampling, 2D convolution, joint attention, ReLU activation, and batch normalization (BN). Among these operations, combined upsampling and joint attention facilitate mutual learning between the streams by integrating multi-parameter priors in both additive and multiplicative manners. Long-range unified skip connections within the SBNs ensure effective information propagation between distant cross-fusion layers. In addition, a dual-normalized edge-orientation similarity regularization incorporated into the training loss enhances detail reconstruction and prevents overfitting. Experimental results consistently demonstrate that PEARL outperforms existing state-of-the-art (SOTA) self-supervised AMRI methods across various MRI cases. Notably, 5-fold to 6-fold accelerated acquisition yields a 1%-2% improvement in SSIM_ROI and a 3%-6% improvement in PSNR_ROI, along with a significant 15%-20% reduction in RLNE_ROI.
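The sub-block network described in the abstract (combined upsampling, 2D convolution, joint attention, ReLU, and BN, with features exchanged between parallel streams) can be illustrated with a minimal PyTorch sketch. The module name CrossFusionSBN, the channel count, the bilinear upsampling, and the sigmoid-gated multiplicative attention are assumptions made for illustration, not the paper's exact design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossFusionSBN(nn.Module):
        # Hypothetical sketch of one cross-fusion sub-block network (SBN):
        # combined upsampling -> 2D convolution -> joint attention -> ReLU -> BN.
        def __init__(self, channels):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            # sigmoid gate for the multiplicative part of the joint attention (assumed form)
            self.gate = nn.Sequential(nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid())
            self.bn = nn.BatchNorm2d(channels)

        def forward(self, x, partner=None):
            # combined upsampling: add the partner stream's features (additive fusion), then upsample
            if partner is not None:
                x = x + partner
            x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
            x = self.conv(x)
            # joint attention: reweight features with a gate derived from the partner stream (multiplicative fusion)
            if partner is not None:
                p = F.interpolate(partner, scale_factor=2, mode="bilinear", align_corners=False)
                x = x * self.gate(self.conv(p))
            x = F.relu(x)
            return self.bn(x)

    # Two parallel streams exchanging features, as in a multiple-stream joint decoder:
    sbn_a, sbn_b = CrossFusionSBN(32), CrossFusionSBN(32)
    za, zb = torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16)
    fa = sbn_a(za, partner=zb)   # stream A fused with stream B's features
    fb = sbn_b(zb, partner=za)   # stream B fused with stream A's features

In the actual method, cascades of such SBNs with long-range unified skip connections form each stream; those details, along with the dual-normalized edge-orientation similarity regularization, are omitted from this sketch.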

Authors

  • Qingyong Zhu
  • Bei Liu
    College of Mathematics and Physics, Hunan University of Arts and Science, Changde 415000, China.
  • Zhuo-Xu Cui
    Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
  • Chentao Cao
    Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
  • Xiaomeng Yan
  • Yuanyuan Liu
    College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
  • Jing Cheng
    Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai, China.
  • Yihang Zhou
Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.
  • Yanjie Zhu
Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Shenzhen, Guangdong, China.
  • Haifeng Wang
    Collaborative Innovation Center of Seafood Deep Processing, Institute of Seafood, Zhejiang Gongshang University, Hangzhou, 310012, China.
  • Hongwu Zeng
    Department of Radiology, Shenzhen Children's Hospital, Shenzhen, Guangdong, People's Republic of China. homerzeng@126.com.
  • Dong Liang
    Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055 China.