The need for balancing 'black box' systems and explainable artificial intelligence: A necessary implementation in radiology.
Journal:
European Journal of Radiology
PMID:
40031377
Abstract
Radiology is one of the medical specialties most significantly impacted by Artificial Intelligence (AI). AI systems, particularly those employing machine and deep learning, excel at processing large datasets and comparing images from similar contexts, meeting core radiological demands. However, implementing AI in radiology presents notable challenges, including concerns about data privacy, informed consent, and the potential for external interference in decision-making processes. Bias represents another critical issue, often stemming from unrepresentative datasets or inadequate system training, which can distort outcomes and exacerbate healthcare inequalities. Additionally, generative AI systems may produce 'hallucinations' arising from their reliance on probabilistic modeling without the ability to distinguish true from false information. Such risks raise ethical and legal questions, especially when AI-induced errors harm patient health. Regarding liability for medical errors involving AI, healthcare professionals currently retain full accountability for their decisions; AI systems remain tools that support, rather than replace, human expertise and judgment. Nevertheless, the "black box" nature of many AI models, in which the reasoning behind outputs remains opaque, limits the possibility of fully informed consent. We advocate prioritizing Explainable Artificial Intelligence (XAI) in radiology. While potentially less performant than black-box models, XAI enhances transparency, allowing patients to understand how their data are used and how AI influences clinical decisions, in line with ethical standards.
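To make the contrast between opaque models and XAI concrete, the sketch below shows one widely used post-hoc explanation technique, Grad-CAM, which highlights the image regions that drive a classifier's prediction. It is a minimal illustration under stated assumptions (a stock ResNet-18 backbone and a random stand-in tensor instead of a preprocessed radiograph), not the method proposed or evaluated in this article.

```python
# Minimal Grad-CAM sketch: one common XAI technique for image classifiers.
# Assumptions: torchvision ResNet-18 as a stand-in model; a random tensor
# in place of a real, preprocessed radiograph.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    activations["value"] = output.detach()

def backward_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block, whose spatial activations Grad-CAM explains.
target_layer = model.layer4[-1]
target_layer.register_forward_hook(forward_hook)
target_layer.register_full_backward_hook(backward_hook)

# Stand-in input; in practice this would be a preprocessed clinical image.
x = torch.randn(1, 3, 224, 224)

scores = model(x)
class_idx = scores.argmax(dim=1)
scores[0, class_idx].backward()

# Weight each activation channel by its average gradient, keep only positive
# evidence (ReLU), and upsample the map to the input resolution.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

print(cam.shape)  # (1, 1, 224, 224): heat map of regions supporting the prediction
```

In a radiological workflow, such a heat map would be overlaid on the original image so that the clinician, and by extension the patient, can see which anatomical regions contributed to the model's output, which is the kind of transparency the abstract argues for.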