Deep-Learning-Based Whole-Lung and Lung-Lesion Quantification Despite Inconsistent Ground Truth: Application to Computerized Tomography in SARS-CoV-2 Nonhuman Primate Models.
Journal:
Academic Radiology
Published Date:
Sep 1, 2023
Abstract
RATIONALE AND OBJECTIVES: Animal modeling of infectious diseases such as coronavirus disease 2019 (COVID-19) is important for exploration of natural history, understanding of pathogenesis, and evaluation of countermeasures. Preclinical studies enable rigorous control of experimental conditions as well as pre-exposure baseline and longitudinal measurements, including medical imaging, that are often unavailable in the clinical research setting. Computerized tomography (CT) imaging provides important diagnostic, prognostic, and disease-characterization information to clinicians and clinical researchers. In that context, automated deep-learning systems for the analysis of CT imaging have been broadly proposed, but their practical utility has been limited. Manual outlining of the ground truth (i.e., lung lesions) requires accurate distinctions between abnormal and normal tissues that often have vague boundaries and is subject to reader heterogeneity in interpretation. Indeed, this subjectivity manifests as wide inconsistency in manual outlines, both among experts and within repeated outlines by the same expert. The application of deep-learning data-science tools has been less well evaluated in the preclinical setting, including in nonhuman primate (NHP) models of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection/COVID-19, in which the translation of human-derived deep-learning tools is challenging. Automated segmentation of the whole lung and lung lesions thus offers a potentially standardized method to detect and quantify disease.
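The quantification step implied by the last sentence can be illustrated with a minimal sketch, assuming binary whole-lung and lesion masks produced by some segmentation model. The array shapes, voxel spacing, and the lesion_burden function below are illustrative assumptions for exposition, not the authors' implementation or pipeline.

```python
import numpy as np

def lesion_burden(lung_mask: np.ndarray,
                  lesion_mask: np.ndarray,
                  voxel_volume_mm3: float) -> dict:
    """Summarize disease burden from binary segmentation masks.

    Assumes lung_mask and lesion_mask are boolean arrays of identical
    shape (hypothetically, the output of a deep-learning segmenter) and
    that voxel_volume_mm3 is the product of the CT voxel spacings in mm.
    """
    # Restrict lesions to voxels inside the lung mask before measuring.
    lesion_in_lung = np.logical_and(lesion_mask, lung_mask)
    lung_volume = lung_mask.sum() * voxel_volume_mm3
    lesion_volume = lesion_in_lung.sum() * voxel_volume_mm3
    return {
        "lung_volume_ml": lung_volume / 1000.0,
        "lesion_volume_ml": lesion_volume / 1000.0,
        "percent_lung_involved": (100.0 * lesion_volume / lung_volume
                                  if lung_volume else 0.0),
    }

# Illustrative usage with synthetic masks standing in for model output.
rng = np.random.default_rng(0)
lung = np.zeros((64, 128, 128), dtype=bool)
lung[16:48, 32:96, 32:96] = True
lesion = np.logical_and(lung, rng.random((64, 128, 128)) > 0.9)
print(lesion_burden(lung, lesion, voxel_volume_mm3=0.7 * 0.7 * 1.25))
```

A per-scan summary such as percent of lung volume involved is one way longitudinal, pre- versus post-exposure comparisons could be standardized once segmentation is automated; the specific metrics reported in the study are described in its Methods, not here.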