Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images.

Journal: Medical Physics
Published Date:

Abstract

PURPOSE: Needle-based procedures for diagnosing and treating prostate cancer, such as biopsy and brachytherapy, have incorporated three-dimensional (3D) transrectal ultrasound (TRUS) imaging to improve needle guidance. Using these images effectively typically requires the physician to segment the prostate manually to define the margins used for accurate registration, targeting, and other guidance techniques. However, manual prostate segmentation is a time-consuming and difficult intraoperative process, often performed while the patient is under sedation (biopsy) or anesthetic (brachytherapy). An automatic 3D TRUS prostate segmentation method could provide physicians with a fast, accurate segmentation, minimizing procedure time and enabling an efficient workflow with improved patient throughput and faster patient access to care. The purpose of this study was to develop a supervised deep learning-based method for segmenting the prostate in 3D TRUS images from different facilities, generated using multiple acquisition methods and commercial ultrasound machine models, to create a generalizable algorithm for needle-based prostate cancer procedures.
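
This excerpt of the abstract stops before the methods, so the network architecture is not given here. As a rough illustration of the kind of supervised segmentation model the purpose statement describes, the sketch below shows a minimal U-Net-style encoder-decoder in PyTorch trained on 2D ultrasound slices with manual segmentations as labels. Everything in it (the TinyUNet name, layer widths, the binary cross-entropy loss, and the 128x128 slice size) is a hypothetical example, not the authors' model.

    # Minimal sketch of a supervised segmentation network for 2D ultrasound
    # slices, assuming a small U-Net-style encoder-decoder in PyTorch.
    # Illustration only: architecture, loss, and hyperparameters are
    # hypothetical, not those of the paper.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # Two 3x3 convolutions with ReLU, the basic U-Net building block.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc1 = conv_block(1, 16)      # grayscale ultrasound input
            self.enc2 = conv_block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = conv_block(32, 64)
            self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec2 = conv_block(64, 32)
            self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = conv_block(32, 16)
            self.head = nn.Conv2d(16, 1, 1)    # per-pixel prostate logit

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            b = self.bottleneck(self.pool(e2))
            # Skip connections concatenate encoder features at each scale.
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.head(d1)

    # One supervised training step on a batch of slices and binary masks.
    model = TinyUNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    slices = torch.randn(4, 1, 128, 128)                    # stand-in for TRUS slices
    masks = torch.randint(0, 2, (4, 1, 128, 128)).float()   # manual segmentations
    loss = loss_fn(model(slices), masks)
    loss.backward()
    optimizer.step()

In practice, a model of this kind would be trained on many expert-segmented images and applied slice by slice (or directly in 3D) to produce the intraoperative prostate boundary; generalization across facilities and ultrasound machines, as targeted in the abstract, depends on the diversity of the training data rather than on the architecture sketched here.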

Authors

  • Nathan Orlando
    Department of Medical Biophysics, Western University, London, ON, N6A 3K7, Canada.
  • Derek J Gillies
    Department of Medical Biophysics, Western University, London, ON, N6A 3K7, Canada.
  • Igor Gyacskov
    Robarts Research Institute, Western University, London, ON, N6A 3K7, Canada.
  • Cesare Romagnoli
    Department of Medical Imaging, Western University, London, ON, N6A 3K7, Canada.
  • David D'Souza
    London Health Sciences Centre, London, ON, N6A 5W9, Canada.
  • Aaron Fenster
    Imaging Research Laboratories, Robarts Research Institute, 100 Perth Drive, London, ON, N6A 5K8, Canada.