Ethical Perspectives on Implementing AI to Predict Mortality Risk in Emergency Department Patients: A Qualitative Study.
Journal:
Studies in health technology and informatics
PMID:
37203776
Abstract
Artificial intelligence (AI) is predicted to improve health care, increase efficiency, and save time and resources, especially in the context of emergency care, where many critical decisions are made. Research shows an urgent need to develop principles and guidance to ensure the ethical use of AI in healthcare. This study aimed to explore healthcare professionals' perceptions of the ethical aspects of implementing an AI application to predict the mortality risk of patients in emergency departments. Data were analysed using abductive qualitative content analysis based on the principles of medical ethics (autonomy, beneficence, non-maleficence, and justice), the principle of explicability, and a new principle of professional governance that emerged from the analysis. For each ethical principle, two conflicts or considerations emerged, elucidating healthcare professionals' perceptions of the ethical aspects of implementing the AI application in emergency departments. The results concerned sharing information from the AI application, resources versus demands, providing equal care, using AI as a support system, trust in AI, AI-based knowledge, professional knowledge versus AI-based information, and conflicts of interest in the healthcare system.