Mitigating patient harm risks: A proposal of requirements for AI in healthcare.
Journal:
Artificial intelligence in medicine
Published Date:
May 23, 2025
Abstract
With the rise of Artificial Intelligence (AI), mitigation strategies may be needed to integrate AI-enabled medical software responsibly, ensuring ethical alignment and patient safety. This study examines how to mitigate the key risks identified by the European Parliamentary Research Service (EPRS). To that end, we discuss how complementary risk-mitigation requirements may ensure the main aspects of AI in healthcare: Reliability (continuous performance evaluation, continuous usability testing, encryption and use of field-tested libraries, semantic interoperability), Transparency (AI passport, explainable AI, data quality assessment, bias check), Traceability (user management, audit trail, review of cases), and Responsibility (regulation check, academic-use-only disclaimer, clinician double check). A survey conducted among 216 medical ICT professionals (medical doctors, ICT staff, and complementary profiles) between March and June 2024 revealed that these requirements were perceived positively by all profiles. Respondents deemed explainable AI and data quality assessment essential for transparency; the audit trail essential for traceability; and regulatory compliance and the clinician double check essential for responsibility. Clinicians rated the following requirements as more relevant (p < 0.05) than technicians did: continuous performance assessment, usability testing, encryption, the AI passport, retrospective case review, and the academic use check. Additionally, users found the AI passport more relevant for transparency than decision-makers did (p < 0.05). We trust that this proposal can serve as a starting point to endow future AI systems in medical practice with requirements that ensure their ethical deployment.