Segmenting and classifying activities in robot-assisted surgery with recurrent neural networks.

Journal: International journal of computer assisted radiology and surgery
Published Date:

Abstract

PURPOSE: Automatically segmenting and classifying surgical activities is an important prerequisite to providing automated, targeted assessment and feedback during surgical training. Prior work has focused almost exclusively on recognizing gestures, or short, atomic units of activity such as pushing a needle through tissue, whereas we also focus on recognizing higher-level maneuvers, such as a suture throw. Maneuvers exhibit more complexity and variability than the gestures from which they are composed; however, working at this granularity has the benefit of being consistent with existing training curricula.
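To make the segmentation-and-classification task concrete, the sketch below labels each frame of a synthetic kinematic sequence with a simple recurrent network and then groups contiguous identical labels into segments. This is an illustrative toy, not the paper's architecture: all dimensions, parameter names, and the use of an untrained vanilla RNN are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed, not from the paper):
# T frames, D kinematic features per frame, H hidden units, C activity classes.
T, D, H, C = 50, 6, 16, 3

# Synthetic kinematic sequence standing in for tool motion data.
x = rng.standard_normal((T, D))

# Randomly initialized RNN parameters; training is omitted in this sketch.
Wxh = rng.standard_normal((D, H)) * 0.1
Whh = rng.standard_normal((H, H)) * 0.1
Why = rng.standard_normal((H, C)) * 0.1

h = np.zeros(H)
labels = []
for t in range(T):
    h = np.tanh(x[t] @ Wxh + h @ Whh)      # recurrent hidden-state update
    logits = h @ Why                        # per-frame class scores
    labels.append(int(np.argmax(logits)))   # predicted activity for frame t

# Contiguous runs of identical labels form the predicted segmentation:
# each segment is [class, start_frame, end_frame].
segments = []
for t, lab in enumerate(labels):
    if not segments or segments[-1][0] != lab:
        segments.append([lab, t, t])
    else:
        segments[-1][2] = t
```

The same per-frame labeling view applies whether the label vocabulary consists of short gestures or longer maneuvers; only the granularity of the classes changes.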

Authors

  • Robert DiPietro
    Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA. rdipietro@gmail.com.
  • Narges Ahmidi
    Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
  • Anand Malpani
    Department of Computer Science, The Johns Hopkins University, 3400 N. Charles St., Malone Hall Room 340, Baltimore, MD, 21218, USA. amalpan1@jhu.edu.
  • Madeleine Waldram
    Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
  • Gyusung I Lee
    Department of Surgery, Johns Hopkins University, Baltimore, MD, USA.
  • Mija R Lee
    Department of Surgery, Johns Hopkins University, Baltimore, MD, USA.
  • S Swaroop Vedula
    Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
  • Gregory D Hager
    Department of Computer Science, The Johns Hopkins University, 3400 N. Charles St., Malone Hall Room 340, Baltimore, MD, 21218, USA.