Interactive prototype learning and self-learning for few-shot medical image segmentation.

Journal: Artificial Intelligence in Medicine

Abstract

Few-shot learning alleviates the heavy dependence of medical image segmentation on large-scale labeled data, but it still shows a substantial performance gap on new tasks compared with traditional deep learning. Existing methods mainly learn class knowledge from a few annotated (support) samples and transfer it to unseen (query) samples. However, large distribution differences between the support and query images cause serious deviations in this transfer of class knowledge, which manifest as two segmentation challenges: intra-class inconsistency and inter-class similarity, both of which produce blurred and confused boundaries. In this paper, we propose a new interactive prototype learning and self-learning network to address these challenges. First, we propose a deep encoding-decoding module that learns high-level features of the support and query images to build peak prototypes carrying the richest semantic information, providing semantic guidance for segmentation. Then, we propose an interactive prototype learning module that improves intra-class feature consistency and reduces inter-class feature similarity by performing mean-prototype interaction on mid-level features and peak-prototype interaction on high-level features. Last, we propose a query-features-guided self-learning module that separates foreground from background at the feature level and combines low-level feature maps to recover boundary information. Our model achieves competitive segmentation performance on benchmark datasets and shows substantial improvement in generalization ability.
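The prototype operations sketched in the abstract can be illustrated with a short example. The following is a minimal, hypothetical Python/PyTorch sketch, not the authors' code: it forms a mean prototype by masked average pooling, a "peak" prototype by masked max pooling over the support foreground, and scores query features by cosine similarity against each prototype. All function names, the pooling choices, and the scoring rule are illustrative assumptions.

    # Hypothetical sketch of prototype-based few-shot segmentation,
    # not the paper's implementation.
    import torch
    import torch.nn.functional as F

    def masked_mean_prototype(feat, mask):
        # feat: (B, C, H, W) support features; mask: (B, 1, H, W) binary foreground mask.
        mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
        num = (feat * mask).sum(dim=(2, 3))            # (B, C) summed foreground features
        den = mask.sum(dim=(2, 3)).clamp(min=1e-6)     # (B, 1) foreground pixel count
        return num / den                               # (B, C) mean prototype

    def masked_peak_prototype(feat, mask):
        # Channel-wise maximum over masked locations ("peak" response); an
        # assumed reading of the paper's peak prototype.
        mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
        masked = feat.masked_fill(mask == 0, float("-inf"))
        return masked.amax(dim=(2, 3))                 # (B, C) peak prototype

    def similarity_map(query_feat, prototype, scale=20.0):
        # Cosine similarity between every query location and the prototype.
        proto = prototype[:, :, None, None]            # (B, C, 1, 1)
        return scale * F.cosine_similarity(query_feat, proto, dim=1, eps=1e-6)

    if __name__ == "__main__":
        sup_feat = torch.randn(1, 256, 32, 32)                 # support features
        sup_mask = (torch.rand(1, 1, 32, 32) > 0.5).float()    # support foreground mask
        qry_feat = torch.randn(1, 256, 32, 32)                 # query features
        p_mean = masked_mean_prototype(sup_feat, sup_mask)
        p_peak = masked_peak_prototype(sup_feat, sup_mask)
        fg_score = similarity_map(qry_feat, p_mean) + similarity_map(qry_feat, p_peak)
        pred = (fg_score > 0).float()                  # crude foreground prediction
        print(pred.shape)                              # torch.Size([1, 32, 32])

In this reading, the mean prototype summarizes the typical foreground appearance while the peak prototype keeps the strongest per-channel activations; combining both similarity maps is one plausible way to use them jointly for guidance.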

Authors

  • Yuhui Song
    School of Computer Science and Technology, Anhui University, Hefei, China.
  • Chenchu Xu
Anhui University, Hefei, China.
  • Boyan Wang
    Tsinghua University, Beijing, China. Electronic address: wby000000@mail.tsinghua.edu.cn.
  • Xiuquan Du
Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Computer Science and Technology, and Center of Information Support & Assurance Technology, Anhui University, Hefei 230601, Anhui, China.
  • Jie Chen
    School of Basic Medical Sciences, Health Science Center, Ningbo University, Ningbo, China.
  • Yanping Zhang
Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Computer Science and Technology, and Center of Information Support & Assurance Technology, Anhui University, Hefei 230601, Anhui, China.
  • Shuo Li
    Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China.