MONAI Label: A framework for AI-assisted interactive labeling of 3D medical images.

Journal: Medical Image Analysis

Abstract

The lack of annotated datasets is a major bottleneck for training new task-specific supervised machine learning models, considering that manual annotation is extremely expensive and time-consuming. To address this problem, we present MONAI Label, a free and open-source framework that facilitates the development of applications based on artificial intelligence (AI) models aimed at reducing the time required to annotate radiology datasets. Through MONAI Label, researchers can develop AI annotation applications focusing on their domain of expertise. It allows researchers to readily deploy their apps as services, which can be made available to clinicians via their preferred user interface. Currently, MONAI Label readily supports locally installed (3D Slicer) and web-based (OHIF) frontends and offers two active learning strategies to facilitate and speed up the training of segmentation algorithms. MONAI Label allows researchers to make incremental improvements to their AI-based annotation applications by making them available to other researchers and clinicians alike. Additionally, MONAI Label provides sample AI-based interactive and non-interactive labeling applications that can be used directly off the shelf, as plug-and-play solutions for any given dataset. Significantly reduced annotation times using the interactive model were observed on two public datasets.
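The abstract mentions two active learning strategies for deciding which volumes a clinician should annotate next. As a rough conceptual illustration only (this is not MONAI Label's actual API; the function names, file names, and score arrays below are hypothetical), the sketch contrasts a random selection strategy with a simple uncertainty-based one that prioritizes the volume whose predicted foreground probabilities are least confident.

```python
import random
from typing import Dict, List

import numpy as np


def random_strategy(unlabeled: List[str]) -> str:
    """Pick the next image to annotate uniformly at random."""
    return random.choice(unlabeled)


def uncertainty_strategy(unlabeled: List[str],
                         scores: Dict[str, np.ndarray]) -> str:
    """Pick the image whose predicted foreground probabilities have the
    highest mean binary entropy (i.e. the model is least certain)."""
    def mean_entropy(p: np.ndarray) -> float:
        eps = 1e-7
        p = np.clip(p, eps, 1.0 - eps)
        return float(np.mean(-p * np.log(p) - (1.0 - p) * np.log(1.0 - p)))

    return max(unlabeled, key=lambda name: mean_entropy(scores[name]))


if __name__ == "__main__":
    # Toy example: three "volumes" with synthetic foreground probabilities.
    rng = np.random.default_rng(0)
    pool = ["img_a.nii.gz", "img_b.nii.gz", "img_c.nii.gz"]
    probs = {
        "img_a.nii.gz": rng.uniform(0.45, 0.55, size=(8, 8, 8)),  # uncertain
        "img_b.nii.gz": rng.uniform(0.90, 0.99, size=(8, 8, 8)),  # confident
        "img_c.nii.gz": rng.uniform(0.01, 0.10, size=(8, 8, 8)),  # confident
    }
    print("random pick     :", random_strategy(pool))
    print("uncertainty pick:", uncertainty_strategy(pool, probs))
```

Under these assumptions, the uncertainty-based strategy would surface img_a.nii.gz first, since its predictions sit near 0.5, which mirrors the general idea of routing the most informative cases to annotators.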

Authors

  • Andres Diaz-Pinto
    Instituto de Investigación e Innovación en Bioingeniería, I3B, Universitat Politècnica de València, Camino de Vera s/n, 46022, Valencia, Spain. andiapin@upv.es.
  • Sachidanand Alle
    NVIDIA, 2788 San Tomas Expressway, Santa Clara, CA 95051, USA.
  • Vishwesh Nath
    Computer Science, Vanderbilt University, Nashville, TN, USA.
  • Yucheng Tang
    NVIDIA Corporation, Santa Clara and Bethesda, USA.
  • Alvin Ihsani
    NVIDIA, 2 Technology Park Drive, Westford, MA 01886, USA.
  • Muhammad Asad
    City, University of London, London, United Kingdom.
  • Fernando Pérez-García
    Department of Medical Physics and Biomedical Engineering, University College London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK; School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK. Electronic address: fernando.perezgarcia.17@ucl.ac.uk.
  • Pritesh Mehta
    School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK.
  • Wenqi Li
    Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK. Electronic address: wenqi.li@ucl.ac.uk.
  • Mona Flores
    NVIDIA Corporation, Bethesda, MD, USA.
  • Holger R Roth
  • Tom Vercauteren
    Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK.
  • Daguang Xu
    NVIDIA, Santa Clara, CA, USA.
  • Prerna Dogra
  • Sébastien Ourselin
    Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK.
  • Andrew Feng
    NVIDIA, Santa Clara, CA, USA.
  • M Jorge Cardoso
Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London WC2R 2LS, UK.