A hybrid object detection approach for visually impaired persons using pigeon-inspired optimization and deep learning models.
Journal:
Scientific Reports
PMID:
40113884
Abstract
Visually impaired persons constitute a significant part of the population and are found all over the globe. Recently, technology has made its presence felt in every field, and state-of-the-art devices aid humans in their everyday lives. However, visually impaired people cannot see the objects around them; they can only imagine the surroundings they move through. Furthermore, web-based applications have been developed to ensure their safety. Using such an application, the user can initiate a request to share his/her location with family members while preserving confidentiality. Through this application, the family members of visually impaired people can follow their activities (capture snapshots and location) while staying at home. A deep learning (DL) model is trained with multiple images of objects highly relevant to visually impaired persons (VIPs). The training images are augmented and manually annotated to make the trained model more robust. This study proposes a Hybrid Approach to Object Detection for Visually Impaired Persons Using Attention-Driven Deep Learning (HAODVIP-ADL) technique. The main aim of the HAODVIP-ADL technique is to deliver a reliable and precise object detection system that helps visually impaired persons navigate their surroundings safely and effectively. The presented HAODVIP-ADL method first applies bilateral filtering (BF) in the image pre-processing stage to reduce noise while preserving edges. For object detection, the HAODVIP-ADL method employs the YOLOv10 framework. In addition, a backbone fusion of feature extraction models, CapsNet and InceptionV3, is implemented to capture diverse spatial and contextual information. A multi-head attention with bidirectional long short-term memory (MHA-BiLSTM) model is used to classify the detected objects.
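The bilateral filtering step named above can be sketched in plain NumPy: each output pixel is a weighted average whose weights combine a spatial Gaussian with a range (intensity-difference) Gaussian, so flat regions are smoothed while edges are preserved. The window radius and the two sigma values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2-D float image (illustrative sketch)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    # Spatial (domain) kernel: a fixed Gaussian over the window offsets.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: down-weights pixels whose intensity differs from
            # the centre, which is what keeps edges sharp.
            rng_k = np.exp(-((patch - img[i, j]) ** 2) / (2.0 * sigma_r**2))
            weights = spatial * rng_k
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

In practice an optimized implementation (e.g. OpenCV's `cv2.bilateralFilter`) would be used; the loop above only makes the two-kernel structure explicit.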
Finally, hyperparameter tuning is performed using the pigeon-inspired optimization (PIO) approach to improve the classification performance of the MHA-BiLSTM model. The experimental results of the HAODVIP-ADL method are analyzed and evaluated on the Indoor Objects Detection dataset. The experimental validation of the HAODVIP-ADL method demonstrated a superior accuracy of 99.74% over existing methods.
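The PIO tuning stage can be illustrated with a minimal sketch of its two standard phases: a map-and-compass operator that steers the flock toward the current best pigeon with an exponentially decaying velocity, and a landmark operator that repeatedly halves the flock and contracts it toward a fitness-weighted center. Here a toy quadratic objective stands in for the MHA-BiLSTM validation loss; the flock size, iteration counts, and map-and-compass factor `r` are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def pio_minimize(objective, bounds, n_pigeons=30, nc1=40, nc2=20, r=0.3, seed=0):
    """Minimal pigeon-inspired optimization sketch (minimization).

    objective : maps a parameter vector to a loss (lower is better).
    bounds    : sequence of (low, high) pairs, one per dimension.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_pigeons, dim))
    v = np.zeros_like(x)
    fit = np.apply_along_axis(objective, 1, x)
    best = x[fit.argmin()].copy()

    # Phase 1: map-and-compass operator -- velocities decay while pulling
    # every pigeon toward the best position found so far.
    for t in range(1, nc1 + 1):
        v = v * np.exp(-r * t) + rng.random((x.shape[0], dim)) * (best - x)
        x = np.clip(x + v, lo, hi)
        fit = np.apply_along_axis(objective, 1, x)
        if fit.min() < objective(best):
            best = x[fit.argmin()].copy()

    # Phase 2: landmark operator -- discard the worse half of the flock and
    # move the survivors toward their fitness-weighted center.
    for _ in range(nc2):
        order = fit.argsort()
        keep = max(2, x.shape[0] // 2)
        x, fit = x[order[:keep]], fit[order[:keep]]
        w = 1.0 / (fit + 1e-12)                      # lower loss -> larger weight
        center = (x * w[:, None]).sum(axis=0) / w.sum()
        x = np.clip(x + rng.random(x.shape) * (center - x), lo, hi)
        fit = np.apply_along_axis(objective, 1, x)
        if fit.min() < objective(best):
            best = x[fit.argmin()].copy()
    return best, objective(best)
```

In the paper's setting, `objective` would train/evaluate the MHA-BiLSTM for a candidate hyperparameter vector (e.g. learning rate, hidden units) and return the validation loss; the quadratic used here only demonstrates the search dynamics.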