Dual-modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks.

Journal: Medical Image Analysis
PMID:

Abstract

Surgical guidance and decision making could be improved by accurate, real-time measurement of intra-operative data, including the shape and spectral information of the tissue surface. In this work, a dual-modality endoscopic system has been proposed to enable tissue surface shape reconstruction and hyperspectral imaging (HSI). The system is built around a probe composed of an incoherent fiber bundle, whose fiber arrangement differs between the two ends, and miniature imaging optics. For 3D reconstruction with structured light (SL), a pattern of randomly distributed spots of different colors is projected onto the tissue surface, creating artificial texture. Pattern decoding with a convolutional neural network (CNN) model and a customized feature descriptor enables real-time 3D surface reconstruction at approximately 12 frames per second (FPS). In HSI mode, spatially sparse hyperspectral signals from the tissue surface are captured with a slit hyperspectral imager in a single snapshot. A CNN-based super-resolution model, the "super-spectral-resolution" network (SSRNet), has also been developed to estimate pixel-level dense hypercubes from the endoscope camera's standard RGB images and the sparse hyperspectral signals, at approximately 2 FPS. The probe's 2.1 mm diameter allows the system to be used through endoscope working channels. Furthermore, since data acquisition in both modes is accomplished in a single snapshot, operation of the system in clinical settings is minimally affected by tissue surface movement and deformation. The whole apparatus has been validated on phantoms and on ex vivo and in vivo tissue, and initial measurements on patients during laryngeal surgery demonstrate its potential in real-world clinical applications.
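
Code sketch (not from the paper): the abstract describes SSRNet only at a high level, so the following PyTorch snippet is a minimal, hypothetical illustration of the underlying fusion idea, estimating a dense hypercube from an RGB frame plus spatially sparse spectral samples. The class name, layer sizes, band count, and mask-based fusion are all assumptions made for illustration, not the published architecture.

    import torch
    import torch.nn as nn

    class SuperSpectralResolutionSketch(nn.Module):
        """Toy stand-in for an SSRNet-style model: fuses an RGB frame with
        spatially sparse hyperspectral samples to predict a dense hypercube.
        Architecture and hyperparameters are illustrative assumptions."""

        def __init__(self, num_bands=24):
            super().__init__()
            # Inputs: 3 RGB channels + num_bands sparse spectra + 1 sampling mask.
            in_ch = 3 + num_bands + 1
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, num_bands, 3, padding=1),
            )

        def forward(self, rgb, sparse_cube, mask):
            # sparse_cube is zero everywhere except the fiber sampling sites;
            # the single-channel mask marks those sites so the network can
            # distinguish "measured zero" from "unmeasured" pixels.
            x = torch.cat([rgb, sparse_cube, mask], dim=1)
            return self.net(x)

    # Smoke test on random data with assumed shapes (256 x 256, 24 bands).
    if __name__ == "__main__":
        model = SuperSpectralResolutionSketch(num_bands=24)
        rgb = torch.rand(1, 3, 256, 256)
        mask = (torch.rand(1, 1, 256, 256) < 0.01).float()  # ~1% of pixels sampled
        sparse = torch.rand(1, 24, 256, 256) * mask          # spectra only at sampled sites
        dense = model(rgb, sparse, mask)
        print(dense.shape)  # torch.Size([1, 24, 256, 256])

Feeding the sampling mask as an extra input channel is one simple way to let a convolutional model condition on where the sparse spectral measurements were taken; the paper's actual training objective and network design are not detailed in the abstract.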

Authors

  • Jianyu Lin
    The Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK; Department of Computing, Imperial College London, London, UK. Electronic address: xjtuljy@gmail.com.
  • Neil T Clancy
    The Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK; Wellcome/EPSRC Centre for Interventional & Surgical Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing, University College London, London, UK; Department of Computer Science, University College London, London, UK; Department of Surgery and Cancer, Imperial College London, London, UK. Electronic address: n.clancy@ucl.ac.uk.
  • Ji Qi
    Institute of Advanced Research, Infervision Medical Technology Co., Ltd, Beijing, China.
  • Yang Hu
    Kweichow Moutai Co., Ltd, Renhuai, Guizhou 564501, China.
  • Taran Tatla
    Department of Otolaryngology, Northwick Park Hospital, Harrow, UK.
  • Danail Stoyanov
    University College London, London, UK.
  • Lena Maier-Hein
    German Cancer Research Center (DKFZ), Computer Assisted Medical Interventions, Heidelberg, Germany.
  • Daniel S Elson
    The Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK; Department of Surgery and Cancer, Imperial College London, London, UK. Electronic address: daniel.elson@imperial.ac.uk.