Improving explanations for medical X-ray diagnosis combining variational autoencoders and adversarial machine learning.
Journal:
Computers in Biology and Medicine
PMID:
39999495
Abstract
Explainability in Medical Computer Vision is one of the most sensitive applications of Artificial Intelligence in healthcare today. In this work, we propose a novel Deep Learning architecture for eXplainable Artificial Intelligence, specifically designed for medical diagnosis. The proposed approach leverages the properties of Variational Autoencoders to produce linear modifications of images in a lower-dimensional embedding space, and then reconstructs these modifications as non-linear explanations in the original image space. The approach is based on global and local regularisation of the latent space, which stores visual and semantic information about the images. Specifically, a multi-objective genetic algorithm is designed to search for explanations, finding individuals that flip the classification output of the network while producing the minimum number of changes in the image descriptor. The genetic algorithm searches for explanations without requiring any hyperparameters to be defined, and uses a single individual to provide a complete explanation of the whole image. Furthermore, the explanations found by the proposed approach are compared with state-of-the-art eXplainable Artificial Intelligence systems; the results show an improvement in explanation precision of between 7.23 and 56.39 percentage points.
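The abstract describes the method only at a high level, but its central mechanism, a multi-objective genetic search over a VAE latent space for a minimal perturbation that flips the classifier, can be illustrated. The Python sketch below is a toy illustration under stated assumptions, not the authors' implementation: the `decode` and `classify` functions are hypothetical linear/logistic stand-ins for the trained VAE decoder and diagnostic network, and the naive Pareto-front selection is an assumed operator, not the one used in the paper.

```python
# Toy sketch (NOT the paper's implementation): multi-objective genetic
# search over a VAE latent space for a minimal counterfactual explanation.
# decode/classify are hypothetical stand-ins for the trained models.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16

W_dec = rng.normal(size=(LATENT_DIM, 64))   # toy linear "decoder" weights
w_clf = rng.normal(size=64)                 # toy linear "classifier" weights

def decode(z):
    """Map a latent vector back to (flattened) image space."""
    return np.tanh(z @ W_dec)               # non-linear reconstruction

def classify(x):
    """Probability of the positive (e.g. pathology) class."""
    return 1.0 / (1.0 + np.exp(-(x @ w_clf)))

def objectives(z, z0, target=0):
    """Two objectives to minimise jointly: confidence in the original
    label (so the output flips) and the size of the latent change
    (so the explanation alters as little of the descriptor as possible)."""
    p = classify(decode(z))
    flip_loss = p if target == 0 else 1.0 - p
    return np.array([flip_loss, np.linalg.norm(z - z0)])

def dominates(a, b):
    """Pareto dominance: a is no worse everywhere and better somewhere."""
    return np.all(a <= b) and np.any(a < b)

def ga_counterfactual(z0, pop=40, gens=100, sigma=0.1):
    """Tiny (mu+lambda)-style GA: mutate latent perturbations and keep
    the non-dominated front each generation (a naive assumed operator)."""
    P = z0 + sigma * rng.normal(size=(pop, LATENT_DIM))
    for _ in range(gens):
        children = P + sigma * rng.normal(size=P.shape)
        both = np.vstack([P, children])
        scores = np.array([objectives(z, z0) for z in both])
        front = [i for i in range(len(both))
                 if not any(dominates(scores[j], scores[i])
                            for j in range(len(both)) if j != i)]
        keep = both[front]
        while len(keep) < pop:                 # top up a small front
            keep = np.vstack([keep, both[rng.integers(len(both))]])
        P = keep[:pop]
    # Return the individual that most strongly flips the classifier.
    return min(P, key=lambda z: objectives(z, z0)[0])

z_orig = rng.normal(size=LATENT_DIM)
z_cf = ga_counterfactual(z_orig)
print("original p:", classify(decode(z_orig)))
print("counterfactual p:", classify(decode(z_cf)))
print("latent change:", np.linalg.norm(z_cf - z_orig))
```

Running the sketch prints the classifier's confidence before and after the search together with the magnitude of the latent change; in the setting the abstract describes, decoding the perturbed latent vector and comparing it against the original reconstruction would yield the visual explanation in image space.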