Explainable artificial intelligence for pharmacovigilance: What features are important when predicting adverse outcomes?

Journal: Computer methods and programs in biomedicine
Published Date:

Abstract

BACKGROUND AND OBJECTIVE: Explainable Artificial Intelligence (XAI) has been identified as a viable method for determining the importance of features when making predictions using Machine Learning (ML) models. In this study, we created models that take an individual's health information (e.g. their drug history and comorbidities) as inputs, and predict the probability that the individual will have an Acute Coronary Syndrome (ACS) adverse outcome.
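The abstract's core idea — probing which inputs most influence a model's predicted ACS risk — can be sketched with a model-agnostic permutation-importance loop. The sketch below is purely illustrative: the feature names, weights, and toy risk model are invented for demonstration and are not taken from the study.

```python
import math
import random

# Hypothetical feature names (illustrative only; not the study's variables).
FEATURES = ["beta_blocker_history", "diabetes", "age_over_65"]

def make_patient(rng):
    """Synthetic patient record: a binary indicator per feature."""
    return {f: rng.random() < 0.5 for f in FEATURES}

def risk_model(patient):
    """Toy stand-in for a trained ML model: returns a pseudo-probability
    of an ACS adverse outcome. Weights are invented for illustration."""
    score = (0.9 * patient["diabetes"]
             + 0.5 * patient["age_over_65"]
             - 0.3 * patient["beta_blocker_history"])
    return 1.0 / (1.0 + math.exp(-score))  # logistic squashing

def permutation_importance(model, patients, feature, rng):
    """Model-agnostic XAI probe: how much do predictions shift, on
    average, when one feature's values are shuffled across patients?"""
    baseline = [model(p) for p in patients]
    shuffled = [p[feature] for p in patients]
    rng.shuffle(shuffled)
    perturbed = [model({**p, feature: v}) for p, v in zip(patients, shuffled)]
    return sum(abs(b - q) for b, q in zip(baseline, perturbed)) / len(patients)

rng = random.Random(0)
patients = [make_patient(rng) for _ in range(500)]
importances = {f: permutation_importance(risk_model, patients, f, rng)
               for f in FEATURES}
```

Because the toy model weights "diabetes" most heavily, shuffling that feature perturbs the predictions the most; the same logic applies to black-box models such as gradient-boosted trees or neural networks, which is what makes permutation-style XAI attractive in pharmacovigilance settings.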

Authors

  • Isaac Ronald Ward
    School of Population & Global Health, University of Western Australia, Perth; Department of Computer Science & Software Engineering, University of Western Australia, Perth.
  • Ling Wang
    The State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, #7 Jinsui Road, Guangzhou, Guangdong 510230, China.
  • Juan Lu
    Yunnan Agricultural University, Kunming, China.
  • Mohammed Bennamoun
    School of Physics, Mathematics and Computing, University of Western Australia, Australia.
  • Girish Dwivedi
Department of Medicine, The University of Western Australia, 35 Stirling Highway, Crawley, Western Australia 6009, Australia.
  • Frank M Sanfilippo
    Cardiovascular Epidemiology Research Centre, School of Population and Global Health, The University of Western Australia, Crawley, Western Australia, Australia.