Quaternion Cross-Modality Spatial Learning for Multi-Modal Medical Image Segmentation.

Journal: IEEE Journal of Biomedical and Health Informatics
PMID:

Abstract

Deep Neural Networks (DNNs) have recently had a large impact on image processing, including medical image segmentation, and the real-valued convolution in DNNs has been widely used in multi-modal medical image segmentation to accurately segment lesions by learning from the data. However, the weighted-summation operation in such convolution limits the ability to maintain the spatial dependence that is crucial for identifying different lesion distributions. In this paper, we propose a novel Quaternion Cross-modality Spatial Learning (Q-CSL) framework that explores spatial information while considering the linkage between multi-modal images. Specifically, we introduce quaternions to represent data and coordinates that contain spatial information. Additionally, we propose a Quaternion Spatial-association Convolution to learn the spatial information. Subsequently, the proposed De-level Quaternion Cross-modality Fusion (De-QCF) module excavates inner-space features and fuses cross-modality spatial dependencies. Our experimental results demonstrate that our approach performs well compared with competitive methods while using only 0.01061 M parameters and 9.95 G FLOPs.
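
To illustrate the general idea of quaternion-valued convolution referenced in the abstract, below is a minimal sketch of a generic quaternion 2D convolution layer built on the Hamilton product. This is an assumption-based illustration of the standard quaternion-convolution formulation, not the authors' Q-CSL or Quaternion Spatial-association Convolution code; the class name `QuaternionConv2d` and all hyperparameters are hypothetical. Channels are split into four quaternion components (r, i, j, k), and four real-valued kernel banks are combined according to quaternion multiplication, which couples the components instead of simply summing weighted channels.

```python
# Minimal sketch (assumption, not the authors' implementation) of a
# Hamilton-product quaternion convolution in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuaternionConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        assert in_channels % 4 == 0 and out_channels % 4 == 0, \
            "channel counts must be divisible by 4 (one slice per quaternion component)"
        in_q, out_q = in_channels // 4, out_channels // 4
        shape = (out_q, in_q, kernel_size, kernel_size)
        # One real-valued kernel bank per quaternion component of the weight.
        self.w_r = nn.Parameter(torch.randn(shape) * 0.02)
        self.w_i = nn.Parameter(torch.randn(shape) * 0.02)
        self.w_j = nn.Parameter(torch.randn(shape) * 0.02)
        self.w_k = nn.Parameter(torch.randn(shape) * 0.02)
        self.stride, self.padding = stride, padding

    def forward(self, x):
        # Split the input channels into the four quaternion components.
        x_r, x_i, x_j, x_k = torch.chunk(x, 4, dim=1)

        def conv(t, w):
            return F.conv2d(t, w, stride=self.stride, padding=self.padding)

        # Hamilton product (w * x) expanded component-wise.
        out_r = conv(x_r, self.w_r) - conv(x_i, self.w_i) - conv(x_j, self.w_j) - conv(x_k, self.w_k)
        out_i = conv(x_i, self.w_r) + conv(x_r, self.w_i) + conv(x_k, self.w_j) - conv(x_j, self.w_k)
        out_j = conv(x_j, self.w_r) - conv(x_k, self.w_i) + conv(x_r, self.w_j) + conv(x_i, self.w_k)
        out_k = conv(x_k, self.w_r) + conv(x_j, self.w_i) - conv(x_i, self.w_j) + conv(x_r, self.w_k)
        return torch.cat([out_r, out_i, out_j, out_k], dim=1)


# Example: four image modalities mapped to the four quaternion components.
x = torch.randn(2, 4, 64, 64)                      # (batch, 4 channels, H, W)
layer = QuaternionConv2d(4, 16, kernel_size=3, padding=1)
print(layer(x).shape)                              # torch.Size([2, 16, 64, 64])
```

Because each output component mixes all four input components with shared kernel banks, a quaternion layer of this form uses roughly a quarter of the parameters of a real-valued convolution with the same channel counts, which is consistent with the small parameter budget reported in the abstract.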

Authors

  • Junyang Chen
    College of Computer Science and Software Engineering and the Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University, Shenzhen, China.
  • Guoheng Huang
  • Xiaochen Yuan
    Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau. Electronic address: xcyuan@must.edu.mo.
  • Guo Zhong
  • Zewen Zheng
  • Chi-Man Pun
    Department of Computer and Information Science, University of Macau, Macau. Electronic address: cmpun@umac.mo.
  • Jian Zhu
  • Zhixin Huang