Do explainable AI (XAI) methods improve the acceptance of AI in clinical practice? An evaluation of XAI methods on Gleason grading.
Journal:
The Journal of Pathology: Clinical Research
PMID:
40079401
Abstract
This work aimed to evaluate both the usefulness and user acceptance of five gradient-based explainable artificial intelligence (XAI) methods in the setting of a clinical decision support system for prostate carcinoma. In addition, we aimed to determine whether XAI increases the acceptance of artificial intelligence (AI) and to recommend a particular method for this use case. The evaluation was conducted with an in-house tool that overlays the AI-generated Gleason grade and the corresponding XAI explanations on the original slide using different visualization approaches. The study was a heuristic evaluation of the five XAI methods. The participants were 15 pathologists from the University Hospital of Augsburg with a wide range of experience in Gleason grading and AI. The evaluation consisted of a user information form, a short questionnaire on each XAI method, a ranking of the methods, and a general questionnaire on the performance and usefulness of the AI. Ratings differed significantly between the methods, with Grad-CAM++ performing best. The majority of participants found both the AI decision support and the XAI explanations helpful. In conclusion, our pilot study suggests that the evaluated XAI methods can indeed improve the usefulness and acceptance of AI. These results are a promising indicator, but further studies with larger sample sizes are warranted to draw more definitive conclusions.
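The abstract does not describe how the gradient-based saliency overlays are computed. For context, the sketch below shows one common way to produce a Grad-CAM++ heatmap for a single tissue patch in PyTorch; the ResNet-50 backbone, target layer, and preprocessing are illustrative assumptions, not the authors' in-house pipeline.

```python
"""Minimal Grad-CAM++ sketch for a patch classifier (assumed model and layer)."""
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Assumption: a ResNet-50 classifier; the last conv block is the target layer.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]

activations, gradients = {}, {}
target_layer.register_forward_hook(
    lambda _m, _i, out: activations.update(value=out))
target_layer.register_full_backward_hook(
    lambda _m, _gi, grad_out: gradients.update(value=grad_out[0]))

def grad_cam_pp(image: Image.Image, class_idx=None) -> torch.Tensor:
    """Return an [H, W] heatmap in [0, 1] for the chosen (or predicted) class."""
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    x = preprocess(image).unsqueeze(0)
    scores = model(x)
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, class_idx].backward()

    A = activations["value"]        # [1, K, h, w] feature maps
    grads = gradients["value"]      # [1, K, h, w] dY/dA
    grads2, grads3 = grads ** 2, grads ** 3

    # Grad-CAM++ pixel-wise weights (second/third-order gradient approximation).
    denom = 2 * grads2 + A.sum(dim=(2, 3), keepdim=True) * grads3
    alpha = grads2 / (denom + 1e-8)
    weights = (alpha * F.relu(grads)).sum(dim=(2, 3), keepdim=True)  # [1, K, 1, 1]

    cam = F.relu((weights * A).sum(dim=1, keepdim=True))             # [1, 1, h, w]
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).squeeze()
```

In a viewer such as the one described here, the resulting heatmap would typically be colorized and alpha-blended over the original slide region so that the pathologist can compare the highlighted tissue with the AI-generated Gleason grade.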