Visual modalities-based multimodal fusion for surgical phase recognition.

Journal: Computers in Biology and Medicine
Published Date:

Abstract

Surgical workflow analysis is essential for optimizing surgery by encouraging efficient communication and resource use. However, phase recognition performance is limited when it relies only on information about the presence of surgical instruments. To address this problem, we propose visual modality-based multimodal fusion for surgical phase recognition, which overcomes the limited diversity of such presence-based information. With the proposed method, we extract a visual kinematics-based index (VKI) that captures how instruments are used, such as their movement and their interrelations during surgery. In addition, we improve recognition performance with an effective convolutional neural network (CNN)-based method for fusing visual features with the VKI. The VKI improves the understanding of a surgical procedure because it encodes instrument interactions, and it can be extracted in any environment, including laparoscopic surgery, providing complementary information that compensates for errors in system kinematics logs. We applied the proposed methodology to two multimodal datasets, a virtual reality (VR) simulator-based dataset (PETRAW) and a private distal gastrectomy surgery dataset, to verify that it can help improve recognition performance in clinical environments. We also explored how the VKI influences the recognition of each surgical workflow through instrument presence and instrument trajectories. Experimental results on the distal gastrectomy video dataset validate the effectiveness of the proposed fusion approach for surgical phase recognition. The relatively simple, index-incorporating fusion we propose yields significant performance improvements over CNN-only training and trains effectively compared with Transformer-based fusion, which requires a large amount of pre-training data.
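
To illustrate the kind of fusion the abstract describes, the following is a minimal PyTorch sketch of concatenation-based fusion between CNN visual features and a visual kinematics-based index. It is not the authors' implementation: the toy VKI (per-instrument speed plus inter-instrument distance computed from detected centroids), the ResNet-18 backbone, the feature dimensions, and the number of phases are all illustrative assumptions.

import torch
import torch.nn as nn
import torchvision.models as models


def toy_vki(centroids: torch.Tensor) -> torch.Tensor:
    """Toy visual kinematics-based index (VKI) from two instrument centroids
    tracked over two consecutive frames (an assumed stand-in for the paper's index).
    centroids: (B, 2 instruments, 2 frames, 2 xy-coordinates)
    returns:   (B, 3) = per-instrument speed (2 values) + inter-instrument distance (1 value)
    """
    speed = (centroids[:, :, 1] - centroids[:, :, 0]).norm(dim=-1)               # (B, 2)
    dist = (centroids[:, 0, 1] - centroids[:, 1, 1]).norm(dim=-1, keepdim=True)  # (B, 1)
    return torch.cat([speed, dist], dim=1)


class VisualVKIFusion(nn.Module):
    """Concatenation fusion of CNN frame features with a VKI vector,
    followed by a linear surgical-phase classifier. Dimensions are illustrative."""

    def __init__(self, num_phases: int = 7, vki_dim: int = 3):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN backbone could be used here
        backbone.fc = nn.Identity()                # expose the 512-d pooled features
        self.backbone = backbone
        self.vki_encoder = nn.Sequential(          # small MLP embedding for the index
            nn.Linear(vki_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.classifier = nn.Linear(512 + 64, num_phases)

    def forward(self, frames: torch.Tensor, vki: torch.Tensor) -> torch.Tensor:
        visual = self.backbone(frames)             # (B, 512) visual features
        kin = self.vki_encoder(vki)                # (B, 64) encoded VKI
        fused = torch.cat([visual, kin], dim=1)    # simple feature-level fusion
        return self.classifier(fused)              # (B, num_phases) phase logits


if __name__ == "__main__":
    frames = torch.randn(2, 3, 224, 224)           # two RGB frames
    centroids = torch.rand(2, 2, 2, 2)             # fake instrument tracks
    logits = VisualVKIFusion()(frames, toy_vki(centroids))
    print(logits.shape)                            # torch.Size([2, 7])

The sketch keeps the fusion deliberately simple (feature concatenation plus a linear classifier), which mirrors the abstract's point that a relatively simple index-incorporating fusion can compete with heavier Transformer-based alternatives.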

Authors

  • Bogyu Park
    AI Dev. Group, Hutom, Dokmak-ro 279, Mapo-gu, 04151, Seoul, Republic of Korea. Electronic address: bgpark@hutom.io.
  • Hyeongyu Chi
    AI Dev. Group, Hutom, Dokmak-ro 279, Mapo-gu, 04151, Seoul, Republic of Korea. Electronic address: hyeongyuc96@hutom.io.
  • Bokyung Park
    AI Dev. Group, Hutom, Dokmak-ro 279, Mapo-gu, 04151, Seoul, Republic of Korea. Electronic address: bokyung@hutom.io.
  • Jiwon Lee
    Department of Pediatrics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea.
  • Hye Su Jin
    AI Dev. Group, Hutom, Dokmak-ro 279, Mapo-gu, 04151, Seoul, Republic of Korea. Electronic address: jiwon@hutom.io.
  • Sunghyun Park
    Yonsei University College of Medicine, Yonsei-ro 50, Seodaemun-gu, 03722, Seoul, Republic of Korea. Electronic address: CODON@yush.ac.
  • Woo Jin Hyung
Department of Surgery, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 120-752, Republic of Korea. Electronic address: wjhyung@yuhs.ac.
  • Min-Kook Choi
    AI Dev. Group, Hutom, Dokmak-ro 279, Mapo-gu, 04151, Seoul, Republic of Korea. Electronic address: mkchoi@hutom.io.