Interactive Explainable Deep Learning Model for Hepatocellular Carcinoma Diagnosis at Gadoxetic Acid-enhanced MRI: A Retrospective, Multicenter, Diagnostic Study.

Journal: Radiology: Imaging Cancer

Abstract

Purpose
To develop an artificial intelligence (AI) model based on gadoxetic acid-enhanced MRI to assist radiologists in diagnosing hepatocellular carcinoma (HCC).

Materials and Methods
This retrospective study included patients with focal liver lesions (FLLs) who underwent gadoxetic acid-enhanced MRI between January 2015 and December 2021. All hepatic malignancies were diagnosed pathologically, whereas benign lesions were confirmed with pathologic findings or imaging follow-up. Five manually labeled bounding boxes for each FLL, obtained from precontrast T1-weighted, T2-weighted, arterial phase, portal venous phase, and hepatobiliary phase images, were included. The lesion classifier component, used to distinguish HCC from non-HCC, was trained and externally tested. The feature classifier, based on a post hoc algorithm, inferred the presence of Liver Imaging Reporting and Data System (LI-RADS) features by analyzing activation patterns of the pretrained lesion classifier. Two radiologists categorized FLLs in the external testing dataset according to LI-RADS criteria. Diagnostic performance of the AI model and the model's impact on reader accuracy were assessed.

Results
The study included 839 patients (mean age, 51 years ± 12 [SD]; 681 male) with 1023 FLLs (594 HCCs and 429 non-HCCs). The AI model yielded areas under the receiver operating characteristic curve of 0.98 and 0.97 in the training and external testing sets, respectively. Compared with LI-RADS category 5, the AI model showed higher sensitivity (91.6% vs 74.8%; P < .001) and similar specificity (90.7% vs 96.0%; P = .22). The two readers identified more LI-RADS major features and more accurately classified category LR-5 lesions when assisted versus unassisted by AI, with higher sensitivities (reader 1, 85.7% vs 72.3%; P < .001; reader 2, 89.1% vs 74.0%; P < .001) and unchanged specificities (93.3% for reader 1 and 94.7% for reader 2; P > .99 for both).
Conclusion
The AI model accurately diagnosed HCC and improved the radiologists' diagnostic performance.

Keywords: Artificial Intelligence, Deep Learning, MRI, Hepatocellular Carcinoma

© RSNA, 2025. See also the commentary by Singh et al in this issue.
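For readers unfamiliar with the metrics reported above, the sketch below shows how area under the receiver operating characteristic curve (AUC), sensitivity, and specificity are computed for a binary HCC vs non-HCC classifier. This is purely illustrative, not the study's code: the labels, scores, and 0.5 threshold are invented, and the AUC is computed via the equivalent Mann-Whitney rank formulation rather than from the study's model outputs.

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold=0.5):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    at a fixed (hypothetical) decision threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Invented toy data: 1 = HCC, 0 = non-HCC; scores are model probabilities.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.2, 0.1]
print(auc(labels, scores))
print(sensitivity_specificity(labels, scores))
```

An AUC near 1.0, as reported for the model (0.98 training, 0.97 external testing), means almost every HCC receives a higher score than every non-HCC, independent of any single threshold; sensitivity and specificity, by contrast, describe performance at one chosen operating point.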

Authors

  • Mingkai Li
Department of Gastroenterology, The Third Affiliated Hospital of Sun Yat-sen University, No. 600 Tianhe Rd, Guangzhou 510000, China.
  • Zhi Zhang
    National Engineering Research Center for Beijing Biochip Technology, Beijing, China.
  • Zebin Chen
    Department of Pharmacy, Shenzhen Children's Hospital, Shenzhen, 518036, People's Republic of China.
  • Xi Chen
Department of Critical Care Medicine, Shenzhen Hospital, Southern Medical University, Shenzhen, Guangdong, China.
  • Huaqing Liu
    Artificial Intelligence Innovation Center, Research Institute of Tsinghua, Pearl River Delta, Guangzhou, 510735, China.
  • Yuanqiang Xiao
    From the Departments of Radiology (M. Li, C.L., A.L., L.Z., Jinhui Zhou, D.Z., H.C., Y.X., J.W.) and Pathology (Jing Zhou), The Third Affiliated Hospital, Sun Yat-Sen University, No. 600 Tianhe Rd, Guangzhou, Guangdong, 510630, People's Republic of China; Medical AI Laboratory, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, People's Republic of China (Y.F., B.H.); Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, People's Republic of China (H.Y.); Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, People's Republic of China (M. Luo); and Department of Clinical Science, Philips Healthcare China, Shanghai, People's Republic of China (X.Y., W.D., Z.Z.).
  • Haimei Chen
Department of Radiology, The Third Affiliated Hospital, Sun Yat-Sen University, No. 600 Tianhe Rd, Guangzhou, Guangdong, 510630, People's Republic of China.
  • Xiaodan Zong
    Department of Radiology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China.
  • Jingbiao Chen
    Department of Radiology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China.
  • Jianning Chen
    Department of Pathology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China.
  • Xinying Wang
    Institute of Artificial Intelligence and Marine Robotics, School of Marine Electrical Engineering, Dalian Maritime University, Dalian, 116026, China. Electronic address: wxy1202@dlmu.edu.cn.
  • Xuehong Xiao
    Department of Radiology, Zhongshan City People's Hospital, Zhongshan, China.
  • Zhiwei Yang
  • Lanqing Han
    Artificial Intelligence Innovation Center, Research Institute of Tsinghua, Pearl River Delta, Guangzhou, China.
  • Jin Wang
    Cells Vision (Guangzhou) Medical Technology Inc., Guangzhou, China. Electronic address: wangjin@cellsvision.com.
  • Bin Wu
    Department of Psychiatry, Xi'an Mental Health Center, Xi'an, China.