DeepCut: Object Segmentation From Bounding Box Annotations Using Convolutional Neural Networks.

Journal: IEEE Transactions on Medical Imaging
Published Date:

Abstract

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut [1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare them to a naïve approach to CNN training under weak supervision. We test its applicability to brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.
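The iterative scheme the abstract describes can be sketched in a few lines. The following is a conceptual toy, not the authors' implementation: a simple intensity-threshold "classifier" stands in for the CNN, and clamping predictions to the bounding box stands in for the dense-CRF energy minimisation; the function name and all details are illustrative assumptions.

```python
import numpy as np


def deepcut_sketch(image, bbox, n_iters=3):
    """Conceptual sketch of a DeepCut-style iterative loop (not the paper's code).

    image: 2-D float array; bbox: (r0, r1, c0, c1), a box containing the object.
    Stand-ins: a mean-intensity "classifier" replaces the CNN, and clamping to
    the box replaces the dense-CRF regularisation step.
    """
    r0, r1, c0, c1 = bbox
    # 1. Initialise training targets from the weak annotation:
    #    pixels inside the bounding box start as foreground.
    labels = np.zeros(image.shape, dtype=bool)
    labels[r0:r1, c0:c1] = True

    for _ in range(n_iters):
        # 2. "Train" a classifier on the current targets
        #    (here: estimate a mean intensity per class).
        fg_mean = image[labels].mean()
        bg_mean = image[~labels].mean()
        # 3. Classify every pixel (stand-in for the CNN prediction).
        pred = np.abs(image - fg_mean) < np.abs(image - bg_mean)
        # 4. Regularise (stand-in for CRF refinement): pixels outside the
        #    bounding box are known background, so clamp them to False.
        pred[:r0, :] = pred[r1:, :] = False
        pred[:, :c0] = pred[:, c1:] = False
        # 5. The refined prediction becomes the next round's training target.
        labels = pred
    return labels
```

On a toy image with a bright square inside the box, the loop converges to the object mask after the first pass; in the actual method, the classifier and regulariser are a CNN and a densely-connected CRF, respectively.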

Authors

  • Martin Rajchl
  • Matthew C H Lee
  • Ozan Oktay
  • Konstantinos Kamnitsas
    Biomedical Image Analysis Group, Imperial College London, UK. Electronic address: konstantinos.kamnitsas12@imperial.ac.uk.
  • Jonathan Passerat-Palmbach
  • Wenjia Bai
    Department of Computing, Imperial College London, London, UK.
  • Mellisa Damodaram
  • Mary A Rutherford
  • Joseph V Hajnal
  • Bernhard Kainz
    Biomedical Image Analysis (BioMedIA) Group, Department of Computing, Imperial College London, UK.
  • Daniel Rueckert
    Biomedical Image Analysis (BioMedIA) Group, Department of Computing, Imperial College London, UK. Electronic address: d.rueckert@imperial.ac.uk.