A ubiquitous and interoperable deep learning model for automatic detection of pleomorphic gastroesophageal lesions.
Journal:
Scientific Reports
Published Date:
Jul 2, 2025
Abstract
In recent years, artificial intelligence (AI) has been widely explored to enhance capsule endoscopy (CE), with the goal of improving the efficiency of the reading process. While most AI models have been developed for small bowel and colon analysis, the development of esophagogastric (E-G) models has been limited by the scarcity of frames captured during the procedure, making it challenging to train a robust model. This study aims to develop an interoperable, ubiquitous model capable of detecting pleomorphic lesions in the E-G tract. We included 59,482 E-G frames from 774 CE procedures performed at five centers to develop a convolutional neural network (CNN). The dataset was divided using an exam-based split, with 90% allocated to training (including 5-fold cross-validation) and the remainder reserved for testing. The primary outcomes were sensitivity, specificity, accuracy, and area under the curve (AUC). During training, the CNN achieved a mean sensitivity of 85.0% (95% CI 76.5-93.5), specificity of 96.3% (95% CI 93.9-98.7), accuracy of 93.6% (95% CI 91.7-95.4), and AUC-ROC of 0.98 (95% CI 0.97-0.98). During testing, the sensitivity, specificity, and accuracy of the CNN were 92.2%, 95.1%, and 94.6%, respectively. CE-AI models capable of assessing the E-G tract are a crucial step toward developing a truly robust panendoscopic model. This ubiquitous and interoperable model, capable of lesion detection in both the esophagus and stomach, achieved good overall accuracy. However, prospective real-world studies comparing its performance with standard upper endoscopy are still needed to validate its clinical applicability.
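The evaluation design described in the abstract (an exam-based split, so that no frames from the same procedure leak between training and test sets, followed by per-frame metric computation) can be sketched as follows. This is a minimal illustration only: the frame tuples, labels, and split fraction are hypothetical placeholders, not the study's actual data or code.

```python
import random


def exam_based_split(frames, train_frac=0.9, seed=0):
    """Split frames so all frames from one exam land on the same side.

    `frames` is a list of (exam_id, label) tuples, where the label is
    1 for a lesion frame and 0 for normal mucosa (illustrative only).
    Splitting at the exam level, rather than the frame level, prevents
    near-duplicate frames from one procedure appearing in both sets.
    """
    exams = sorted({exam_id for exam_id, _ in frames})
    rng = random.Random(seed)
    rng.shuffle(exams)
    n_train = int(len(exams) * train_frac)
    train_ids = set(exams[:n_train])
    train = [f for f in frames if f[0] in train_ids]
    test = [f for f in frames if f[0] not in train_ids]
    return train, test


def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```

In practice, the 5-fold cross-validation mentioned in the abstract would apply the same grouping principle within the 90% training portion (e.g. scikit-learn's `GroupKFold` with the exam ID as the group key).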