3D multi-scale, multi-task, and multi-label deep learning for prediction of lymph node metastasis in T1 lung adenocarcinoma patients' CT images.

Journal: Computerized Medical Imaging and Graphics: the official journal of the Computerized Medical Imaging Society
Published Date:

Abstract

The diagnosis of preoperative lymph node (LN) metastasis is crucial for evaluating possible therapy options for T1 lung adenocarcinoma patients. Radiologists preoperatively diagnose LN metastasis by evaluating signs related to LN metastasis, such as spiculation or lobulation of pulmonary nodules in CT images. However, this type of evaluation is subjective and time-consuming, which may result in poor consistency and low efficiency of diagnosis. In this study, a 3D Multi-scale, Multi-task, and Multi-label classification network (3M-CN) was proposed to predict LN metastasis and to evaluate multiple related signs of pulmonary nodules, thereby improving the accuracy of LN metastasis prediction. The following key approaches were adopted in this method. First, a multi-scale feature fusion module was proposed to aggregate features from different levels, since different labels are best modeled at different levels; second, an auxiliary segmentation task was applied to force the model to focus more on the nodule region and less on surrounding unrelated structures; and third, a cross-modal integration module, called the refine layer, was designed to integrate related risk factors into the model to further improve its confidence level. The 3M-CN was trained on data from 401 cases and then validated on internal and external datasets of 100 and 53 cases, respectively. The proposed 3M-CN model was compared with existing state-of-the-art methods for prediction of LN metastasis and outperformed them, achieving the best performance with AUCs of 0.945 and 0.948 on the internal and external test datasets, respectively. The proposed model not only achieves strong generalization but also greatly enhances the interpretability of the deep learning model, increases doctors' confidence in its results, and conforms to doctors' diagnostic process; it may also be transferable to the diagnosis of other diseases.
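
As a rough illustration of how the three components described in the abstract could fit together, the following PyTorch sketch combines a 3D encoder, multi-scale fusion of globally pooled features, an auxiliary segmentation decoder, a multi-label head for nodule signs, and a refine layer that concatenates clinical risk factors with image features. This is a minimal sketch under stated assumptions, not the authors' implementation: the class and module names (ThreeMCN, RefineLayer-style head), channel sizes, and the counts num_signs and num_risk_factors are hypothetical choices made for illustration only.

```python
# Hedged sketch of a 3M-CN-style network (illustrative, not the published code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )


class ThreeMCN(nn.Module):
    """Multi-scale fusion + auxiliary segmentation + multi-label signs + refine layer."""

    def __init__(self, num_signs=4, num_risk_factors=3):  # label counts are assumptions
        super().__init__()
        # 3D encoder at three scales
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.enc3 = conv_block(32, 64)
        self.pool = nn.MaxPool3d(2)

        # Auxiliary segmentation decoder forces attention on the nodule region
        self.up2 = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.seg_head = nn.Conv3d(16, 1, kernel_size=1)

        # Multi-scale feature fusion: globally pooled features from all three levels
        fused_dim = 16 + 32 + 64
        self.sign_head = nn.Linear(fused_dim, num_signs)  # multi-label nodule signs
        # Refine layer: image features + sign logits + clinical risk factors -> LN logit
        self.refine = nn.Sequential(
            nn.Linear(fused_dim + num_signs + num_risk_factors, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 1),
        )

    def forward(self, x, risk_factors):
        f1 = self.enc1(x)              # scale 1
        f2 = self.enc2(self.pool(f1))  # scale 2
        f3 = self.enc3(self.pool(f2))  # scale 3

        # Decoder path for the auxiliary segmentation task
        d2 = self.dec2(torch.cat([self.up2(f3), f2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), f1], dim=1))
        seg_logits = self.seg_head(d1)

        # Multi-scale fusion by global average pooling and concatenation
        gap = lambda f: F.adaptive_avg_pool3d(f, 1).flatten(1)
        fused = torch.cat([gap(f1), gap(f2), gap(f3)], dim=1)

        sign_logits = self.sign_head(fused)
        ln_logit = self.refine(torch.cat([fused, sign_logits, risk_factors], dim=1))
        return seg_logits, sign_logits, ln_logit


if __name__ == "__main__":
    net = ThreeMCN()
    ct_patch = torch.randn(2, 1, 32, 64, 64)  # batch of 3D CT nodule patches
    risk = torch.randn(2, 3)                  # placeholder clinical risk factors
    seg, signs, ln = net(ct_patch, risk)
    print(seg.shape, signs.shape, ln.shape)   # (2,1,32,64,64) (2,4) (2,1)
```

In such a design, the segmentation, sign, and metastasis outputs would each receive their own loss (e.g., Dice plus binary cross-entropy terms) and be optimized jointly; the specific loss weighting used by the authors is not stated in this abstract.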

Authors

  • Xingyu Zhao
    University of Science and Technology of China, Hefei, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China.
  • Xiang Wang
    Department of Thoracic Surgery, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China.
  • Wei Xia
    Key Laboratory of Environment and Health, Ministry of Education & Ministry of Environmental Protection, State Key Laboratory of Environmental Health (Incubation), School of Public Health, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China.
  • Rui Zhang
    Department of Cardiology, Zhongda Hospital, Medical School of Southeast University, Nanjing, China.
  • Junming Jian
    University of Science and Technology of China, Hefei, Anhui, 230026, China.
  • Jiayi Zhang
    School of Basic Medical Sciences, Health Science Center, Ningbo University, Ningbo, China.
  • Yechen Zhu
    Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China.
  • Yuguo Tang
    Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China.
  • Zhen Li
    PepsiCo R&D, Valhalla, NY, United States.
  • Shiyuan Liu
    Department of Radiology, Changzheng Hospital of the Navy Medical University, Shanghai, China. Electronic address: liushiyuan@smmu.edu.cn.
  • Xin Gao
    Department of Computer Science, New Jersey Institute of Technology, Newark, New Jersey, USA.