SynMSE: A multimodal similarity evaluator for complex distribution discrepancy in unsupervised deformable multimodal medical image registration.
Journal:
Medical image analysis
Published Date:
Apr 22, 2025
Abstract
Unsupervised deformable multimodal medical image registration often confronts complex scenarios involving intermodality domain gaps, multi-organ anatomical heterogeneity, and physiological motion variability. These factors introduce substantial grayscale distribution discrepancies that hinder precise alignment between different imaging modalities, and existing methods have not been sufficiently adapted to the specific demands of registration in such scenarios. To overcome these challenges, we propose SynMSE, a novel multimodal similarity evaluator that can be seamlessly integrated into any registration framework as a plug-and-play similarity metric. SynMSE is trained using random transformations to simulate spatial misalignments and a structure-constrained generator to model grayscale distribution discrepancies. By emphasizing spatial alignment while mitigating the influence of complex distributional variations, SynMSE effectively addresses these issues. Extensive experiments on the Learn2Reg 2022 CT-MR abdomen dataset, a clinical cervical CT-MR dataset, and the CuRIOUS MR-US brain dataset demonstrate that SynMSE achieves state-of-the-art performance. Our code is available on the project page https://github.com/MIXAILAB/SynMSE.
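The training recipe described above, random spatial transformations to simulate misalignment plus a structure-constrained generator to simulate cross-modality appearance shifts, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the random translation, the monotonic intensity remapping (a toy stand-in for the structure-constrained generator), and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_translate(img, max_shift=5):
    # Simulate spatial misalignment with a random integer translation
    # (a simple stand-in for the paper's random transformations).
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return shifted, float(np.hypot(dy, dx))  # misalignment magnitude as target

def structure_preserving_remap(img):
    # Toy stand-in for the structure-constrained generator: a random
    # monotonic intensity remapping alters the grayscale distribution
    # (mimicking a modality change) while preserving structure, since
    # relative intensity ordering, and hence edges, are unchanged.
    knots = np.sort(rng.uniform(0.0, 1.0, size=6))
    knots[0], knots[-1] = 0.0, 1.0
    xs = np.linspace(0.0, 1.0, 6)
    return np.interp(img, xs, knots)

def make_training_triple(fixed):
    # One training sample for the evaluator: the fixed image, a
    # pseudo-other-modality image with known misalignment, and the
    # misalignment magnitude the evaluator should learn to predict.
    moved, target = random_translate(fixed)
    pseudo_moving = structure_preserving_remap(moved)
    return fixed, pseudo_moving, target

fixed = rng.random((64, 64))
f, m, t = make_training_triple(fixed)
```

An evaluator trained on such triples learns to score spatial alignment irrespective of intensity distribution, which is what lets it serve as a drop-in similarity metric for a registration network.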