Optimal features assisted multi-attention fusion for robust fire recognition in adverse conditions.
Journal:
Scientific Reports
Published Date:
Jul 4, 2025
Abstract
Deep neural networks have significantly improved fire detection systems based on visual data. However, high false-alarm rates, shallow network architectures, and poor recognition in challenging environments continue to hinder practical deployment. To address these limitations, we introduce the Attention-Enhanced Fire Recognition Network (AEFRN), a progressive attention-over-attention framework that achieves state-of-the-art (SOTA) performance while remaining computationally efficient. Our approach introduces three key innovations. First, a Convolutional Self-Attention (CSA) module integrates global self-attention with convolution through dynamic kernels and trainable filters for enhanced low-level fire feature processing. Second, Recursive Atrous Self-Attention (RASA) with optimized dilation rates captures comprehensive multi-scale contextual information through a recursive formulation with minimal parameter overhead. Third, an enhanced Convolutional Block Attention Module (CBAM) with modified channel and spatial attention mechanisms provides robust feature discrimination. We validate AEFRN's interpretability with Grad-CAM visualizations, demonstrating that attention focuses on fire-relevant regions. Comprehensive evaluation on the FD and BoWFire benchmark datasets shows AEFRN's superiority over SOTA methods, achieving 99.11% accuracy on FD and 97.98% on BoWFire. Extensive comparisons against twelve SOTA approaches confirm AEFRN's effectiveness for fire detection in challenging scenarios while maintaining computational efficiency suitable for practical deployment.
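The abstract does not describe the specific modifications AEFRN makes to CBAM's channel and spatial attention branches. As background, the sketch below shows the standard CBAM formulation (Woo et al., 2018) that the paper builds on: channel attention from pooled global statistics passed through a shared MLP, followed by spatial attention from a convolution over channel-pooled maps. All module and parameter names here (`ChannelAttention`, `SpatialAttention`, `reduction`, `kernel_size`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over global avg- and max-pooled features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w                          # reweight each channel

class SpatialAttention(nn.Module):
    """Spatial attention: conv over channel-wise avg and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # per-pixel average over channels
        mx = x.amax(dim=1, keepdim=True)     # per-pixel max over channels
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                          # reweight each spatial location

class CBAM(nn.Module):
    """Standard CBAM: channel attention followed by spatial attention.
    AEFRN's enhanced variant modifies both branches (details not in the abstract)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))
```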