A Comprehensive Video Dataset for Surgical Laparoscopic Action Analysis.
Journal:
Scientific Data
Published Date:
May 24, 2025
Abstract
Laparoscopic surgery is widely used across surgical fields because it is minimally invasive and allows rapid recovery, but it demands a high level of technical expertise from surgeons. While advances in computer vision and deep learning have contributed substantially to surgical action recognition, these technologies are held back by the limitations of existing publicly available datasets, such as small scale, high homogeneity, and inconsistent labeling quality. To address these issues, we developed the SLAM dataset (Surgical LAparoscopic Motions), which spans multiple procedure types, including laparoscopic cholecystectomy and appendectomy. The dataset contains 4,097 video clips, each annotated with one of seven key actions: Abdominal Entry, Use Clip, Hook Cut, Suturing, Panoramic View, Local Panoramic View, and Suction. In addition, we comprehensively validated the dataset using the ViViT model; the experimental results show that it supports effective training and evaluation for laparoscopic surgical action recognition, reaching a best classification accuracy of 85.90%. As a publicly available benchmark resource, the SLAM dataset aims to advance laparoscopic surgical action recognition and artificial intelligence-driven surgery, supporting intelligent surgical robots and surgical automation.
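For readers interested in reproducing this style of evaluation, the sketch below shows how a pretrained ViViT checkpoint could be adapted to the seven SLAM action classes with the Hugging Face transformers implementation. This is a minimal illustration, not the authors' pipeline: the checkpoint name, frame-sampling strategy, clip path, and fine-tuning setup are assumptions introduced here for illustration only.

    # Hypothetical sketch: adapting a pretrained ViViT checkpoint to the seven
    # SLAM action classes. Checkpoint, paths, and frame counts are illustrative
    # assumptions, not details taken from the paper.
    import av            # pip install av (video decoding)
    import numpy as np
    import torch
    from transformers import VivitImageProcessor, VivitForVideoClassification

    SLAM_CLASSES = [
        "Abdominal Entry", "Use Clip", "Hook Cut", "Suturing",
        "Panoramic View", "Local Panoramic View", "Suction",
    ]

    processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
    model = VivitForVideoClassification.from_pretrained(
        "google/vivit-b-16x2-kinetics400",
        num_labels=len(SLAM_CLASSES),          # replace the Kinetics head with 7 classes
        ignore_mismatched_sizes=True,
        id2label=dict(enumerate(SLAM_CLASSES)),
        label2id={c: i for i, c in enumerate(SLAM_CLASSES)},
    )

    def sample_frames(video_path: str, num_frames: int = 32) -> list:
        """Decode a clip and return num_frames uniformly spaced RGB frames."""
        with av.open(video_path) as container:
            frames = [f.to_ndarray(format="rgb24") for f in container.decode(video=0)]
        idx = np.linspace(0, len(frames) - 1, num_frames).astype(int)
        return [frames[i] for i in idx]

    # Single-clip inference (assumes the classification head has been
    # fine-tuned on labeled SLAM clips beforehand).
    frames = sample_frames("example_clip.mp4")   # hypothetical file name
    inputs = processor(frames, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(SLAM_CLASSES[int(logits.argmax(-1))])

In practice, the new 7-way classification head would be trained on the labeled clips (e.g., with a standard cross-entropy objective) before accuracy comparable to the reported 85.90% could be expected; the snippet only outlines the model setup and per-clip inference path.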