User-Centered Methods in Explainable AI Development for Hospital Clinical Decision Support: A Scoping Review.
Journal:
Studies in Health Technology and Informatics
PMID:
40357594
Abstract
Explainable Artificial Intelligence (XAI) offers a promising means of enhancing the transparency and usability of AI-based Clinical Decision Support Systems (CDSS) in healthcare settings. These tools aim to improve clinical outcomes by assisting with diagnosis, treatment planning, and risk prediction. However, integrating XAI into clinical workflows requires the effective involvement of healthcare professionals to ensure that the explanations these tools provide are comprehensible, relevant, and actionable. This scoping review aimed to investigate how (potential) end users were involved in the design and development of XAI-based CDSS for hospitals. A systematic search of Medline, Embase, and Web of Science identified 11 studies meeting the inclusion criteria. Interviews and focus groups, mainly with physicians, were common methods, while some studies also included nurses and developers. Four of the 11 studies engaged users across multiple stages, from pre-design to prototype testing, and specifically tested different explanation techniques with end users. A quality assessment of the included papers found that some studies had unclear recruitment strategies and insufficiently detailed analyses. Future work should engage end users early in the design process, include health professionals with diverse experiences and backgrounds, and test explanation techniques to ensure that the chosen methods align with clinicians' cognitive processes.