Cross-dimensional transfer learning in medical image segmentation with deep learning.

Journal: Medical Image Analysis
Published Date:

Abstract

Over the last decade, convolutional neural networks have emerged and advanced the state of the art in various image analysis and computer vision applications. The performance of 2D image classification networks, trained on databases of millions of natural images, is constantly improving. In the field of medical image analysis, progress has also been remarkable, but it is slowed by the relative lack of annotated data and by the inherent constraints of the acquisition process. These limitations are even more pronounced given the volumetric nature of medical imaging data. In this paper, we introduce an efficient way to transfer the performance of a 2D classification network trained on natural images to 2D and 3D uni- and multi-modal medical image segmentation applications. To this end, we designed novel architectures based on two key principles: weight transfer, which embeds a 2D pre-trained encoder into a higher-dimensional U-Net, and dimensional transfer, which expands a 2D segmentation network into a higher-dimensional one. The proposed networks were tested on benchmarks covering different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echocardiographic data segmentation, surpassing the state of the art. On the 2D/3D MR and CT abdominal images of the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper in terms of Dice, RAVD, ASSD, and MSSD scores, and ranked third on the online evaluation platform. Our 3D network applied to the BraTS 2022 competition also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core, and 81.75% (83.88%) for the enhanced tumor with the approach based on weight (dimensional) transfer. Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.
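
The abstract only sketches the two transfer principles. As a purely illustrative aid, and not the authors' actual architecture, the snippet below shows one common way a 2D-to-3D weight transfer can be realized in PyTorch: a pre-trained 2D convolution kernel is replicated ("inflated") along a new depth axis so it can initialize a 3D encoder layer. The helper name inflate_conv2d_to_3d and the averaging scheme are assumptions made for this sketch only.

```python
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
    """Build a Conv3d initialized from a pre-trained Conv2d.

    The 2D kernel (out, in, kH, kW) is replicated along a new depth axis and
    divided by `depth`, so the 3D layer's response on a depth-constant volume
    matches the original 2D response (a common kernel-inflation heuristic).
    """
    conv3d = nn.Conv3d(
        in_channels=conv2d.in_channels,
        out_channels=conv2d.out_channels,
        kernel_size=(depth, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(depth // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        w2d = conv2d.weight  # shape: (out, in, kH, kW)
        # Repeat along the new depth dimension and rescale.
        conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

# Toy usage: a "pre-trained" 2D layer becomes the first layer of a 3D encoder.
conv2d = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)
conv3d = inflate_conv2d_to_3d(conv2d, depth=3)
volume = torch.randn(1, 1, 32, 64, 64)  # (batch, channels, D, H, W)
print(conv3d(volume).shape)              # torch.Size([1, 16, 32, 64, 64])
```

Dividing the replicated kernel by the depth keeps the scale of the activations comparable to the 2D network, which is why this heuristic is often preferred over simply copying the kernel into the central depth slice.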

Authors

  • Hicham Messaoudi
    Laboratory of Medical Informatics (LIMED), Faculty of Technology, University of Bejaia, 06000 Bejaia, Algeria. Electronic address: hicham.messaoudi@univ-bejaia.dz.
  • Ahror Belaid
Laboratory of Medical Informatics (LIMED), Faculty of Exact Sciences, University of Bejaia, 06000 Bejaia, Algeria; Data Science & Applications Research Unit (CERIST), 06000 Bejaia, Algeria.
  • Douraied Ben Salem
Neuroradiology, University Hospital of Brest, boulevard Tanguy-Prigent, 29609 Brest cedex, France; Laboratory of Medical Information Processing (LaTIM), Inserm UMR 1101, CS 93837, Université de Bretagne Occidentale, 22, avenue Camille-Desmoulins, 29238 Brest cedex 3, France. Electronic address: douraied.bensalem@chu-brest.fr.
  • Pierre-Henri Conze
    Inserm, UMR 1101, Brest F-29200, France; Institut Mines-Télécom Atlantique, Brest F-29200, France.