MinT: Magnetic resonance image unsupervised translation via decoupling anatomical structure and contrast.
Journal:
Computers in Biology and Medicine
Published Date:
Jul 22, 2025
Abstract
Unsupervised image-to-image translation, which synthesizes new images from existing ones, has become a prominent research topic in computer vision. This technique is particularly valuable in the magnetic resonance (MR) imaging domain, where acquiring multi-contrast images is often challenging. In this study, we propose an unsupervised MR image-to-image translation method that synthesizes new contrast images from acquired images. The proposed method is based on style transfer and exploits a key characteristic of multi-contrast MR images: they share identical anatomical structures but differ in contrast. Our network comprises an encoder that disentangles anatomical features and contrast representations, a generator that synthesizes target contrast images, and a discriminator that distinguishes real from synthesized images. The network is trained using novel loss functions, specifically designed to preserve anatomical structure and enable accurate contrast translation, so that MR images are synthesized without distortion. Quantitative and qualitative evaluations demonstrate that our method outperforms existing image-to-image translation approaches. Furthermore, our experiments show that the proposed method generalizes across various anatomical regions and contrast types.
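To make the anatomy/contrast decoupling described in the abstract concrete, the following is a minimal PyTorch-style sketch of such an architecture: an anatomy (content) encoder, a contrast (style) encoder, a generator that recombines the two, and a discriminator. All module names, layer sizes, the modulation scheme, and the training objective are illustrative assumptions, not the authors' published implementation or loss functions.

import torch
import torch.nn as nn

class AnatomyEncoder(nn.Module):
    """Maps an MR slice to a spatial, contrast-agnostic anatomical feature map."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 4, stride=2, padding=1),
            nn.InstanceNorm2d(feat_ch), nn.ReLU(True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
            nn.InstanceNorm2d(feat_ch), nn.ReLU(True),
        )
    def forward(self, x):
        return self.net(x)

class ContrastEncoder(nn.Module):
    """Maps an MR slice to a global contrast code with no spatial structure."""
    def __init__(self, in_ch=1, code_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, code_dim),
        )
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Synthesizes an image from anatomical features modulated by a contrast code."""
    def __init__(self, feat_ch=64, code_dim=8, out_ch=1):
        super().__init__()
        # Per-channel scale and shift derived from the contrast code (AdaIN-like).
        self.mod = nn.Linear(code_dim, feat_ch * 2)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, 32, 4, stride=2, padding=1), nn.ReLU(True),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )
    def forward(self, anatomy, contrast_code):
        scale, shift = self.mod(contrast_code).chunk(2, dim=1)
        h = anatomy * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.dec(h)

class Discriminator(nn.Module):
    """PatchGAN-style critic distinguishing real from synthesized images."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Translation: take anatomy from the source image and contrast from a
# target-contrast exemplar, then decode a synthesized target-contrast image.
if __name__ == "__main__":
    src = torch.randn(2, 1, 128, 128)   # e.g. T1-weighted slices (dummy data)
    tgt = torch.randn(2, 1, 128, 128)   # e.g. T2-weighted exemplars (dummy data)
    E_a, E_c, G, D = AnatomyEncoder(), ContrastEncoder(), Generator(), Discriminator()
    fake_tgt = G(E_a(src), E_c(tgt))    # same anatomy, target contrast
    print(fake_tgt.shape, D(fake_tgt).shape)

In a sketch like this, structure-preserving and contrast-translation objectives would typically combine an adversarial loss on the discriminator output with consistency terms on the re-encoded anatomy and contrast codes; the specific losses used in the paper are not reproduced here.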