Automatic deep learning-based pleural effusion classification in lung ultrasound images for respiratory pathology diagnosis.

Journal: Physica medica : PM : an international journal devoted to the applications of physics to medicine and biology : official journal of the Italian Association of Biomedical Physics (AIFB)
Published Date:

Abstract

Lung ultrasound (LUS) imaging as a point-of-care diagnostic tool for lung pathologies has been proven superior to X-ray and comparable to CT, enabling earlier and more accurate diagnosis in real time at the patient's bedside. The main limitation to its widespread use is its dependence on operator training and experience. COVID-19 LUS findings predominantly reflect a pneumonitis pattern, with pleural effusion being infrequent; however, pleural effusion is easy to detect and quantify, so it was selected as the subject of this study, which aims to develop an automated system for the interpretation of LUS images of pleural effusion. A LUS dataset was collected at the Royal Melbourne Hospital, consisting of 623 videos containing 99,209 2D ultrasound images from 70 patients, acquired with a phased array transducer. A standardized protocol was followed that involved scanning six anatomical regions, providing complete coverage of the lungs for diagnosis of respiratory pathology. This protocol, combined with a deep learning algorithm using a Spatial Transformer Network, provides a basis for automatic pathology classification at the image level. In this work, the deep learning model was trained using supervised and weakly supervised approaches, which used frame-based and video-based ground truth labels, respectively. The reference standard was expert clinician image interpretation. Both approaches achieved comparable accuracy on the test set, 92.4% and 91.1% respectively, with no statistically significant difference. However, the video-based labelling approach requires substantially less effort from clinical experts for ground truth labelling.
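
The abstract outlines, but does not detail, the model architecture or the weak-supervision scheme. The sketch below is a minimal, hypothetical illustration of how a Spatial Transformer Network (STN) front-end could feed a per-frame classifier, and how a video-level label could be propagated to its frames for weakly supervised training. The layer sizes, the `STNClassifier` class and the `frames_with_video_label` helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): an STN front-end followed by a small
# CNN classifier for per-frame pleural effusion detection. Input resolution,
# channel counts and the weak-supervision strategy (assigning a video-level
# label to every frame) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class STNClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Localization network: predicts a 2x3 affine transform per image.
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc_loc = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 4 * 4, 32), nn.ReLU(), nn.Linear(32, 6)
        )
        # Initialize the transform to the identity so training starts stably.
        self.fc_loc[-1].weight.data.zero_()
        self.fc_loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float)
        )
        # Classification head applied to the spatially transformed frame.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):  # x: (B, 1, H, W) grayscale ultrasound frames
        theta = self.fc_loc(self.loc(x)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)  # warp to the relevant region
        return self.backbone(x)


# Weakly supervised variant: every frame inherits its video's label, so only
# one annotation per video is needed instead of one per frame.
def frames_with_video_label(video_frames: torch.Tensor, video_label: int):
    labels = torch.full((video_frames.shape[0],), video_label, dtype=torch.long)
    return video_frames, labels
```

With 99,209 frames across 623 videos (roughly 160 frames per video on average), labelling at the video level rather than the frame level would reduce the annotation burden by about two orders of magnitude, which is the practical advantage the abstract attributes to the weakly supervised approach.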

Authors

  • Chung-Han Tsai
    School of Clinical Sciences, Queensland University of Technology, Gardens Point Campus, 2 George St, Brisbane 4000, QLD, Australia.
  • Jeroen van der Burgt
    School of Clinical Sciences, Queensland University of Technology, Gardens Point Campus, 2 George St, Brisbane 4000, QLD, Australia.
  • Damjan Vukovic
  • Nancy Kaur
    School of IT & Systems, Faculty of Science and Technology, University of Canberra, 11 Kirinari Street, Bruce 2617, ACT, Australia.
  • Libertario Demi
  • David Canty
    Department of Surgery (Royal Melbourne Hospital), University of Melbourne, Royal Parade, Parkville 3050, VIC, Australia.
  • Andrew Wang
    Department of Surgery (Royal Melbourne Hospital), University of Melbourne, Royal Parade, Parkville 3050, VIC, Australia.
  • Alistair Royse
    Department of Surgery (Royal Melbourne Hospital), University of Melbourne, Royal Parade, Parkville 3050, VIC, Australia.
  • Colin Royse
    Department of Surgery (Royal Melbourne Hospital), University of Melbourne, Royal Parade, Parkville 3050, VIC, Australia; Outcomes Research Consortium, Cleveland Clinic, Cleveland, OH, USA.
  • Kavi Haji
    Department of Surgery (Royal Melbourne Hospital), University of Melbourne, Royal Parade, Parkville 3050, VIC, Australia.
  • Jason Dowling
    Australian e-Health Research Centre, CSIRO, Digital Productivity Flagship.
  • Girija Chetty
    School of IT & Systems, Faculty of Science and Technology, University of Canberra, 11 Kirinari Street, Bruce 2617, ACT, Australia.
  • Davide Fontanarosa
Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Queensland, Australia; School of Clinical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia. Electronic address: d3.fontanarosa@qut.edu.au.