Meta multi-task nuclei segmentation with fewer training samples.

Journal: Medical Image Analysis
Published Date:

Abstract

Cells and nuclei convey rich information about the tissue microenvironment. An automatic nuclei segmentation approach can reduce pathologists' workload and enable precise analysis of the microenvironment for biological and clinical research. Existing deep learning models have achieved outstanding performance under the supervision of large amounts of labeled data. However, when data from an unseen domain arrive, a certain amount of manual annotation still has to be prepared to train the model for each new domain. Unfortunately, obtaining histopathological annotations is extremely difficult: it requires high expertise and is time-consuming. In this paper, we attempt to build a generalized nuclei segmentation model with less data dependency and more generalizability. To this end, we propose a meta multi-task learning (Meta-MTL) model for nuclei segmentation that requires fewer training samples. Model-agnostic meta-learning (MAML) is applied as the outer optimization algorithm for the segmentation model. We introduce a contour-aware multi-task learning model as the inner model. A feature fusion and interaction block (FFIB) is proposed to allow feature communication across both tasks. Extensive experiments demonstrate that our proposed Meta-MTL model improves model generalization and achieves performance comparable to state-of-the-art models with fewer training samples. Our model can also perform fast adaptation to an unseen domain with only a few manual annotations. Code is available at https://github.com/ChuHan89/Meta-MTL4NucleiSegmentation.
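The abstract describes MAML as the outer optimizer wrapped around a contour-aware multi-task inner model. For illustration only, the sketch below shows a minimal first-order MAML outer loop in PyTorch, assuming a hypothetical `meta_model` whose forward pass returns a nuclei-mask head and a contour head; it is not the authors' implementation (the FFIB and full architecture are in the linked repository).

```python
# Minimal first-order MAML sketch, assuming PyTorch.
# `meta_model`, the two-head forward pass, and the (support, query) batches
# are hypothetical placeholders standing in for the paper's actual setup.
import copy
import torch
import torch.nn.functional as F


def meta_train_step(meta_model, tasks, meta_optimizer,
                    inner_lr=1e-3, inner_steps=3):
    """One outer-loop update. `tasks` is a list of (support, query) pairs,
    each a tuple of (image, nuclei_mask, contour_mask) tensors."""
    meta_optimizer.zero_grad()
    meta_grads = [torch.zeros_like(p) for p in meta_model.parameters()]

    for support, query in tasks:
        # Inner loop: adapt a copy of the meta-parameters on the support set.
        learner = copy.deepcopy(meta_model)
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            img, seg_gt, cnt_gt = support
            seg_pred, cnt_pred = learner(img)  # two task heads
            loss = (F.binary_cross_entropy_with_logits(seg_pred, seg_gt)
                    + F.binary_cross_entropy_with_logits(cnt_pred, cnt_gt))
            inner_opt.zero_grad()
            loss.backward()
            inner_opt.step()

        # Outer loss: evaluate the adapted learner on the query set and
        # accumulate first-order gradients with respect to its parameters.
        img, seg_gt, cnt_gt = query
        seg_pred, cnt_pred = learner(img)
        q_loss = (F.binary_cross_entropy_with_logits(seg_pred, seg_gt)
                  + F.binary_cross_entropy_with_logits(cnt_pred, cnt_gt))
        grads = torch.autograd.grad(q_loss, list(learner.parameters()))
        for g_acc, g in zip(meta_grads, grads):
            g_acc += g / len(tasks)

    # Apply the averaged query gradients to the meta-parameters (FOMAML).
    for p, g in zip(meta_model.parameters(), meta_grads):
        p.grad = g
    meta_optimizer.step()
```

At test time, fast adaptation to an unseen domain would reuse only the inner loop: a few gradient steps on the handful of annotated samples from the new domain, starting from the meta-learned parameters.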

Authors

  • Chu Han
  • Huasheng Yao
    Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China.
  • Bingchao Zhao
    The School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, 510006, China; Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, 510080, China.
  • Zhenhui Li
    College of Information Sciences and Technology, Pennsylvania State University.
  • Zhenwei Shi
    Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands.
  • Lei Wu
    Advanced Photonics Center, Southeast University, Nanjing, 210096, China.
  • Xin Chen
    University of Nottingham, Nottingham, United Kingdom.
  • Jinrong Qu
    Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou 450008, China.
  • Ke Zhao
    Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China.
  • Rushi Lan
    Guangxi Colleges and Universities Key Laboratory of Intelligent Processing of Computer Image and Graphics, Guilin University of Electronic Technology, Guilin, Guangxi, China.
  • Changhong Liang
    Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, 106 Zhongshan Er Road, Guangzhou, 510080, China.
  • Xipeng Pan
    Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong 510080, China; Guangdong Cardiovascular Institute, Guangzhou, Guangdong 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong 510080, China.
  • Zaiyi Liu
    Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China.