Brain Tumour Segmentation and Grading Using Local and Global Context-Aggregated Attention Network Architecture.
Journal:
Bioengineering (Basel, Switzerland)
Published Date:
May 21, 2025
Abstract
Brain tumours (BTs) are among the most dangerous and life-threatening cancers in humans of all ages, and early detection of BTs can substantially improve treatment outcomes. However, grade recognition remains a challenging problem for radiologists engaged in automated diagnosis and healthcare monitoring. Recent research has therefore explored deep learning-based mechanisms for segmentation and grading to assist radiologists in diagnostic analysis. Segmentation refers to the identification and delineation of tumour regions in medical images, while classification assigns a grade based on tumour characteristics such as size, location and enhancement pattern. The main aim of this research is to design and develop an intelligent model that can detect and grade tumours more effectively. This research develops an aggregated architecture called LGCNet, which combines a local context attention network and a global context attention network. LGCNet exploits information extracted across task, dimension and scale. Specifically, the global context attention network captures multi-scale features, while the local context attention network is designed for task-specific features. The two networks are then aggregated, and the learning network balances the tasks by combining the classification and segmentation loss functions. The main advantage of LGCNet is its dedicated sub-network for each task. The proposed model is evaluated on the BraTS2019 dataset using several metrics, including the Dice score, sensitivity, specificity and Hausdorff distance. Comparative analysis with existing models shows a marginal improvement and provides scope for further research into BT segmentation and classification. The scope of this study is limited to the BraTS2019 dataset; future work aims to extend the model's applicability to different clinical and imaging environments.
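To make the aggregation idea concrete, below is a minimal PyTorch sketch of a dual-branch network with a combined segmentation/classification loss, in the spirit of what the abstract describes. All module names (GlobalContextBranch, LocalContextBranch, LGCNetSketch), layer sizes, pooling scales and the loss weight alpha are illustrative assumptions, not the authors' actual LGCNet implementation.

```python
# Illustrative sketch only: assumed branch designs and hyperparameters,
# not the paper's LGCNet architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContextBranch(nn.Module):
    """Captures multi-scale context by pooling features at several scales."""
    def __init__(self, channels):
        super().__init__()
        self.scales = (1, 2, 4)  # assumed pooling scales
        self.proj = nn.Conv2d(channels * len(self.scales), channels, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        # Pool to each scale, then upsample back and fuse.
        pooled = [F.interpolate(F.adaptive_avg_pool2d(x, s), size=(h, w),
                                mode="bilinear", align_corners=False)
                  for s in self.scales]
        return self.proj(torch.cat(pooled, dim=1))

class LocalContextBranch(nn.Module):
    """Task-specific local attention: a learned per-pixel gating map."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.attn(x)  # element-wise attention weighting

class LGCNetSketch(nn.Module):
    """Aggregates both branches; heads for segmentation and grading."""
    def __init__(self, in_ch=4, channels=32, n_classes=2, n_grades=3):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, channels, 3, padding=1)
        self.global_branch = GlobalContextBranch(channels)
        self.local_branch = LocalContextBranch(channels)
        self.seg_head = nn.Conv2d(channels, n_classes, 1)
        self.cls_head = nn.Linear(channels, n_grades)

    def forward(self, x):
        feat = F.relu(self.stem(x))
        # Aggregation of the global and local context branches.
        feat = self.global_branch(feat) + self.local_branch(feat)
        seg_logits = self.seg_head(feat)
        cls_logits = self.cls_head(feat.mean(dim=(-2, -1)))  # global pooling
        return seg_logits, cls_logits

def combined_loss(seg_logits, cls_logits, seg_target, cls_target, alpha=0.5):
    """Balances segmentation and classification (alpha is an assumed weight)."""
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    cls_loss = F.cross_entropy(cls_logits, cls_target)
    return alpha * seg_loss + (1 - alpha) * cls_loss

def dice_score(pred_mask, target_mask, eps=1e-6):
    """Dice overlap between binary masks, the primary BraTS metric."""
    inter = (pred_mask & target_mask).sum().item()
    return (2 * inter + eps) / (pred_mask.sum().item()
                                + target_mask.sum().item() + eps)
```

A forward pass on a BraTS-style input of shape (batch, 4 modalities, H, W) yields per-pixel segmentation logits plus per-volume grade logits, and the single combined loss lets one optimiser balance both tasks, matching the joint-training scheme the abstract outlines.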