Ensemble of deep learning and IoT technologies for improved safety in smart indoor activity monitoring for visually impaired individuals.
Journal:
Scientific reports
Published Date:
Jul 30, 2025
Abstract
Indoor activity monitoring for elderly and visually impaired individuals uses sensor technology to observe movement and interaction within the living area. Such a system can recognize deviations from regular patterns, deliver alerts, and help ensure safety when dangers or latent risks arise. These solutions improve quality of life by promoting independence while providing peace of mind to loved ones and caregivers. Visual impairment challenges daily independence; deep learning (DL)-based Human Activity Recognition (HAR) supports safe, real-time task performance for visually impaired individuals while assisting caregivers with timely alerts and monitoring. This paper develops an Ensemble of Deep Learning for Enhanced Safety in Smart Indoor Activity Monitoring (EDLES-SIAM) technique for visually impaired people. The EDLES-SIAM technique is primarily designed to enhance indoor activity monitoring and ensure the safety of visually impaired people in IoT-enabled environments. Initially, the proposed EDLES-SIAM technique performs image pre-processing using adaptive bilateral filtering (ABF) to reduce noise and enhance sensor data quality. Next, the ResNet50 model is employed for feature extraction to capture complex spatial patterns in visual data. For detecting indoor activities, an ensemble DL classifier combines three approaches: a deep neural network (DNN), bidirectional long short-term memory (BiLSTM), and a sparse stacked autoencoder (SSAE). A wide range of simulation analyses is performed on the fall detection dataset to verify the enhanced performance of the EDLES-SIAM method. The performance validation of the EDLES-SIAM method demonstrated superior results over existing techniques, with values of 99.25%, 98.00%, 98.53%, and 98.23% on its respective evaluation measures.
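
To make the described pipeline concrete, the following is a minimal sketch of how the pre-processing, feature-extraction, and ensemble-classification stages could be wired together. It assumes a TensorFlow/Keras implementation; the layer sizes, number of activity classes, the reshaping used for the BiLSTM branch, and the soft-voting rule are illustrative assumptions rather than details taken from the paper, and the ABF step is approximated here with OpenCV's standard bilateral filter.

    # Minimal sketch of an EDLES-SIAM-style pipeline (illustrative assumptions throughout).
    import numpy as np
    import cv2
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

    NUM_CLASSES = 5      # assumed number of indoor-activity classes
    FEATURE_DIM = 2048   # ResNet50 average-pooled feature size

    def denoise(frame: np.ndarray) -> np.ndarray:
        # Approximates the ABF pre-processing with an edge-preserving bilateral filter.
        return cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)

    # ResNet50 backbone (ImageNet weights) used purely as a spatial feature extractor.
    backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

    def extract_features(frames: np.ndarray) -> np.ndarray:
        # frames: (N, 224, 224, 3) uint8 images -> (N, 2048) feature vectors.
        x = preprocess_input(frames.astype("float32"))
        return backbone.predict(x, verbose=0)

    def build_dnn() -> tf.keras.Model:
        return models.Sequential([
            layers.Input(shape=(FEATURE_DIM,)),
            layers.Dense(512, activation="relu"),
            layers.Dropout(0.3),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    def build_bilstm() -> tf.keras.Model:
        # Treats the 2048-d feature vector as a short pseudo-sequence for the BiLSTM branch.
        return models.Sequential([
            layers.Input(shape=(FEATURE_DIM,)),
            layers.Reshape((16, FEATURE_DIM // 16)),
            layers.Bidirectional(layers.LSTM(64)),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    def build_ssae() -> tf.keras.Model:
        # Stacked encoder with L1 activity penalties to encourage sparsity, plus a classifier head.
        return models.Sequential([
            layers.Input(shape=(FEATURE_DIM,)),
            layers.Dense(256, activation="relu",
                         activity_regularizer=tf.keras.regularizers.l1(1e-5)),
            layers.Dense(64, activation="relu",
                         activity_regularizer=tf.keras.regularizers.l1(1e-5)),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    def ensemble_predict(members, features: np.ndarray) -> np.ndarray:
        # Soft voting: average the class probabilities of the three trained members.
        probs = np.mean([m.predict(features, verbose=0) for m in members], axis=0)
        return probs.argmax(axis=1)

In this sketch, each member model would be compiled and trained separately on the extracted ResNet50 features before being combined by soft voting; other fusion rules (e.g., weighted or majority voting) would slot into ensemble_predict without changing the rest of the pipeline.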