Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence.
Journal:
Communications biology
Published Date:
Feb 5, 2021
Abstract
Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque. To better understand these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% variability among the human graders themselves. Ambiguity in the ground truth had a notable, visualizable impact on the machine learning results. The convolutional neural network balanced among the graders and allowed for predictions that could be modified depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
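The abstract reports grader-vs-grader and grader-vs-algorithm variability as percentages derived from Hamming distances between segmentations. The paper's exact implementation is not reproduced here; as a minimal sketch under the assumption that segmentations are pixel-wise label masks, the metric can be illustrated as a normalized Hamming distance (the function name and toy data below are illustrative, not from the paper):

```python
import numpy as np

def hamming_pct(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Normalized Hamming distance between two label masks, as a percentage.

    Counts the fraction of pixels at which the two segmentations disagree,
    which matches the percentage-style variability figures in the abstract.
    """
    if mask_a.shape != mask_b.shape:
        raise ValueError("masks must have identical shapes")
    return 100.0 * float(np.mean(mask_a != mask_b))

# Toy example: two 4x4 binary masks that differ at 2 of 16 pixels.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 1
b[3, 3] = 1
print(hamming_pct(a, b))  # 12.5
```

Averaging this quantity over all grader pairs (or grader-algorithm pairs) and all images would yield overall variability figures of the kind quoted above.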
Authors
Keywords
Adult
Algorithms
Animals
Artificial Intelligence
Clinical Competence
Deep Learning
Female
Humans
Image Interpretation, Computer-Assisted
Macaca fascicularis
Machine Learning
Male
Middle Aged
Multimodal Imaging
Neural Networks, Computer
Observer Variation
Reproducibility of Results
Retina
Retinal Diseases
Retrospective Studies
Tomography, Optical Coherence