Explainable artificial intelligence for pharmacovigilance: What features are important when predicting adverse outcomes?
Journal:
Computer Methods and Programs in Biomedicine
Published Date:
Sep 26, 2021
Abstract
BACKGROUND AND OBJECTIVE: Explainable Artificial Intelligence (XAI) has been identified as a viable method for determining the importance of features when making predictions using Machine Learning (ML) models. In this study, we created models that take an individual's health information (e.g. their drug history and comorbidities) as inputs, and predict the probability that the individual will have an Acute Coronary Syndrome (ACS) adverse outcome.
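The pipeline the abstract describes (a model maps an individual's drug history and comorbidities to an ACS adverse-outcome probability, and an XAI method then scores each feature's importance) can be illustrated with a minimal, self-contained sketch. This is not the authors' model: the feature names, weights, synthetic data, and the choice of permutation importance as the explanation method are all assumptions for illustration.

```python
import math
import random

# Hypothetical binary input features standing in for drug history and
# comorbidities; the real study's feature set is not reproduced here.
FEATURES = ["ace_inhibitor", "statin", "diabetes", "hypertension"]


def predict_acs_probability(record, weights, bias=-2.0):
    """Toy logistic model: probability of an ACS adverse outcome."""
    z = bias + sum(weights[f] * record[f] for f in FEATURES)
    return 1.0 / (1.0 + math.exp(-z))


def permutation_importance(records, outcomes, weights, feature,
                           n_repeats=20, seed=0):
    """Importance = drop in mean log-likelihood when one feature is shuffled.

    Shuffling a feature breaks its association with the outcome; the larger
    the resulting drop in model fit, the more the model relied on it.
    """
    rng = random.Random(seed)

    def mean_loglik(recs):
        total = 0.0
        for rec, y in zip(recs, outcomes):
            p = predict_acs_probability(rec, weights)
            total += math.log(p if y else 1.0 - p)
        return total / len(recs)

    baseline = mean_loglik(records)
    drops = []
    for _ in range(n_repeats):
        shuffled = [r[feature] for r in records]
        rng.shuffle(shuffled)
        permuted = [dict(r, **{feature: v})
                    for r, v in zip(records, shuffled)]
        drops.append(baseline - mean_loglik(permuted))
    return sum(drops) / n_repeats


# Synthetic cohort: outcomes are sampled from the toy model itself, with
# "diabetes" given a deliberately large weight and "statin" a zero weight.
rng = random.Random(42)
weights = {"ace_inhibitor": 0.3, "statin": 0.0,
           "diabetes": 2.0, "hypertension": 0.8}
records = [{f: rng.randint(0, 1) for f in FEATURES} for _ in range(500)]
outcomes = [rng.random() < predict_acs_probability(r, weights)
            for r in records]

importances = {f: permutation_importance(records, outcomes, weights, f)
               for f in FEATURES}
```

On this synthetic cohort the strongly weighted "diabetes" feature receives a clearly positive importance while the zero-weight "statin" feature scores near zero, mirroring the kind of feature ranking XAI methods produce for such models.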