Leveraging transformers and explainable AI for Alzheimer's disease interpretability.

Journal: PloS one
Published Date:

Abstract

Alzheimer's disease (AD) is a progressive brain disorder that causes memory loss, cognitive decline, and behavioral changes. Alarmingly, one in nine adults over the age of 65 has AD. There is currently no cure for AD beyond a few experimental treatments; however, early detection offers opportunities to take part in clinical trials or other investigations of potential new and effective Alzheimer's treatments. To detect Alzheimer's disease, brain scans such as computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography (PET) can be performed. Many studies have applied computer vision to MRI images, with accuracies ranging from 80-90%; new computer vision algorithms and cutting-edge transformers have the potential to improve this performance. We utilize advanced transformers and computer vision algorithms to enhance diagnostic accuracy, achieving 99% accuracy in categorizing Alzheimer's disease stages by processing RNA sequence data and brain MRI images in near-real-time. We integrate the Local Interpretable Model-agnostic Explanations (LIME) explainable AI (XAI) technique to ensure the transformers' acceptance, reliability, and human interpretability. LIME helps identify crucial features in RNA sequences or specific areas in MRI images essential for diagnosing AD.
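The LIME technique mentioned in the abstract works by perturbing an input, querying the black-box model on the perturbations, and fitting a proximity-weighted linear surrogate whose coefficients act as local feature importances. The following is a minimal NumPy sketch of that core idea only, not the authors' implementation; the `black_box` classifier and every parameter value here are hypothetical.

```python
import numpy as np

def lime_explain(predict_fn, x, num_samples=1000, kernel_width=0.75, seed=0):
    """Minimal LIME-style local explanation:
    perturb x, weight samples by proximity, fit a weighted linear model."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise
    Z = x + rng.normal(scale=0.5, size=(num_samples, x.size))
    y = predict_fn(Z)                          # query the black-box model
    d = np.linalg.norm(Z - x, axis=1)          # distance to the instance
    w = np.exp(-(d ** 2) / kernel_width ** 2)  # exponential proximity kernel
    # Weighted least squares: coefficients approximate local feature importance
    Zb = np.hstack([Z, np.ones((num_samples, 1))])  # add intercept column
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Zb * W, y * W[:, 0], rcond=None)
    return coef[:-1]  # per-feature importances (intercept dropped)

# Hypothetical black box: "probability of AD" driven mostly by feature 0
black_box = lambda X: 1 / (1 + np.exp(-(3 * X[:, 0] + 0.2 * X[:, 1])))
imp = lime_explain(black_box, np.array([0.1, 0.4]))
```

Because the surrogate is fit only near the instance, the recovered weights should rank feature 0 well above feature 1 for this toy classifier, mirroring how LIME highlights the RNA features or MRI regions that drive a particular AD prediction.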

Authors

  • Humaira Anzum
    AISIP Lab, Ahsanullah University of Science and Technology, Dhaka, Bangladesh.
  • Nabil Sadd Sammo
    AISIP Lab, Ahsanullah University of Science and Technology, Dhaka, Bangladesh.
  • Shamim Akhter
    AISIP Lab, Ahsanullah University of Science and Technology, Dhaka, Bangladesh.