CATARACTS: Challenge on automatic tool annotation for cataRACT surgery.

Journal: Medical Image Analysis

Abstract

Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that these lessons will guide the design of efficient surgery monitoring tools in the near future.

Authors

  • Hassan Al Hajj
    Inserm, UMR 1101, Brest F-29200, France.
  • Mathieu Lamard
    Université de Bretagne Occidentale, 3 rue des Archives, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France.
  • Pierre-Henri Conze
    Inserm, UMR 1101, Brest F-29200, France; Institut Mines-Télécom Atlantique, Brest F-29200, France.
  • Soumali Roychowdhury
    D-Wave Systems Inc., Burnaby, BC, V5G 4M9, Canada.
  • Xiaowei Hu
    Dept. of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
  • Gabija Maršalkaitė
    Oxipit, UAB, Vilnius, LT-10224, Lithuania.
  • Odysseas Zisimopoulos
    Digital Surgery Ltd, EC1V 2QY, London, UK.
  • Muneer Ahmad Dedmari
    Chair for Computer Aided Medical Procedures, Faculty of Informatics, Technical University of Munich, Garching b. Munich, 85748, Germany.
  • Fenqiang Zhao
Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310000, China.
  • Jonas Prellberg
    Dept. of Informatics, Carl von Ossietzky University, Oldenburg, 26129, Germany.
  • Manish Sahu
Zuse Institute Berlin, Berlin, Germany. Electronic address: sahu@zib.de.
  • Adrian Galdran
  • Teresa Araújo
    Faculdade de Engenharia da Universidade do Porto (FEUP), R. Dr. Roberto Frias s/n, 4200-465 Porto, Portugal.
  • Duc My Vo
    Gachon University, 1342 Seongnamdaero, Sujeonggu, Seongnam, 13120, Korea.
  • Chandan Panda
    Epsilon, Bengaluru, Karnataka, 560045, India.
  • Navdeep Dahiya
    Laboratory of Computational Computer Vision, Georgia Tech, Atlanta, GA, 30332, USA.
  • Satoshi Kondo
    Konica Minolta, Inc., Osaka, 569-8503, Japan.
  • Zhengbing Bian
    D-Wave Systems Inc., Burnaby, BC, V5G 4M9, Canada.
  • Arash Vahdat
    D-Wave Systems Inc., Burnaby, BC, V5G 4M9, Canada.
  • Jonas Bialopetravičius
    Oxipit, UAB, Vilnius, LT-10224, Lithuania.
  • Evangello Flouty
    Digital Surgery Ltd, EC1V 2QY, London, UK.
  • Chenhui Qiu
Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310000, China.
  • Sabrina Dill
    Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany.
  • Anirban Mukhopadhyay
    Zuse Institute Berlin, Berlin, Germany.
  • Pedro Costa
  • Guilherme Aresta
    Faculdade de Engenharia da Universidade do Porto (FEUP), R. Dr. Roberto Frias s/n, 4200-465 Porto, Portugal.
  • Senthil Ramamurthy
    Laboratory of Computational Computer Vision, Georgia Tech, Atlanta, GA, 30332, USA.
  • Sang-Woong Lee
    National Research Center for Dementia, Gwangju, Republic of Korea.
  • Aurélio Campilho
    Faculdade de Engenharia da Universidade do Porto (FEUP), R. Dr. Roberto Frias s/n, 4200-465 Porto, Portugal.
  • Stefan Zachow
    Zuse Institute Berlin, Berlin, Germany.
  • Shunren Xia
    Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China.
  • Sailesh Conjeti
Chair for Computer Aided Medical Procedures, Faculty of Informatics, Technical University of Munich, Germany. Electronic address: sailesh.conjeti@tum.de.
  • Danail Stoyanov
    University College London, London, UK.
  • Jogundas Armaitis
    Oxipit, UAB, Vilnius, LT-10224, Lithuania.
  • Pheng-Ann Heng
  • William G Macready
    D-Wave Systems Inc., Burnaby, BC, V5G 4M9, Canada.
  • Béatrice Cochener
    Université de Bretagne Occidentale, 3 rue des Archives, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France; Service d'Ophtalmologie, CHRU Brest, 2 avenue Foch, Brest F-29200, France.
  • Gwenolé Quellec
    Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France. Electronic address: gwenole.quellec@inserm.fr.