Assessment of Driver Inattention State Using Multimodal Wearable Signals and Cross-Attention-Driven Hierarchical Fusion.
Journal:
Studies in Health Technology and Informatics
PMID:
40200458
Abstract
Identifying driver inattention is crucial for road safety and driver well-being, and its detection can be enhanced using multimodal physiological signals. However, effective fusion of multimodal data is highly challenging, particularly with intermediate fusion, where the fused representation can become less informative than the individual modalities. In this study, we address this challenge by employing a 1D Convolutional Neural Network (CNN) with Cross Hierarchical Attention Fusion on multimodal data. For this purpose, electrocardiogram (ECG, 256 Hz) and respiration (RESP, 128 Hz) signals were acquired from subjects (N=10) via textile electrodes while driving under two scenarios: normal driving and driving while making a phone call. The acquired multimodal data were preprocessed, hierarchically fused, and passed through a cross-attention mechanism to identify driver inattention. Experiments were conducted using Leave-One-Subject-Out Cross-Validation (LOSOCV). The proposed approach was able to classify driver inattention states, and shorter data segments yielded higher accuracy than longer segments. In addition, multimodal data from textile electrodes effectively discriminated between driver inattention states. Therefore, the proposed approach, deployed via wearable smart shirts, enables non-intrusive monitoring in real-world driving scenarios.
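The abstract does not give implementation details, but the described fusion can be sketched in PyTorch. Below is a minimal sketch, assuming two 1D-CNN branches (one per modality) whose temporal feature maps are fused by bidirectional cross-attention before classification; the class names, layer widths, kernel sizes, and pooling choices are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn


class Branch1D(nn.Module):
    """Per-modality 1D CNN feature extractor (illustrative depth and widths)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )

    def forward(self, x):            # x: (batch, 1, time)
        return self.net(x)           # -> (batch, channels, time // 16)


class CrossAttentionFusion(nn.Module):
    """Two-branch 1D CNN whose feature maps are fused by cross-attention."""
    def __init__(self, channels: int = 64, heads: int = 4, n_classes: int = 2):
        super().__init__()
        self.ecg_branch = Branch1D(channels)
        self.resp_branch = Branch1D(channels)
        # Each modality queries the other's temporal feature sequence.
        self.ecg_to_resp = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.resp_to_ecg = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.classifier = nn.Linear(2 * channels, n_classes)

    def forward(self, ecg, resp):    # ecg: (B, 1, T_ecg), resp: (B, 1, T_resp)
        e = self.ecg_branch(ecg).transpose(1, 2)    # (B, L_e, C)
        r = self.resp_branch(resp).transpose(1, 2)  # (B, L_r, C)
        e_att, _ = self.ecg_to_resp(e, r, r)        # ECG attends to RESP
        r_att, _ = self.resp_to_ecg(r, e, e)        # RESP attends to ECG
        fused = torch.cat([e_att.mean(dim=1), r_att.mean(dim=1)], dim=-1)
        return self.classifier(fused)               # (B, n_classes) logits


model = CrossAttentionFusion()
ecg = torch.randn(8, 1, 256 * 10)    # 10 s of ECG sampled at 256 Hz
resp = torch.randn(8, 1, 128 * 10)   # 10 s of RESP sampled at 128 Hz
logits = model(ecg, resp)            # -> torch.Size([8, 2])
```

The differing sampling rates (256 Hz vs. 128 Hz) produce feature sequences of different lengths, which cross-attention handles naturally since queries and keys/values may differ in length.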
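The LOSOCV protocol can likewise be sketched with scikit-learn's LeaveOneGroupOut, treating each of the N=10 subjects as one group. The placeholder feature arrays and the logistic-regression stand-in (used instead of the CNN purely for brevity) are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 128))        # placeholder: one feature row per segment
y = rng.integers(0, 2, size=1000)           # 0 = normal driving, 1 = calling
subjects = rng.integers(0, 10, size=1000)   # subject ID per segment (N=10)

logo = LeaveOneGroupOut()                   # one fold per held-out subject
scores = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    # Train on 9 subjects, evaluate on the held-out subject.
    clf = LogisticRegression(max_iter=1000)  # stand-in for the CNN model
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(f"LOSOCV mean accuracy over {len(scores)} subjects: {np.mean(scores):.3f}")
```

Grouping by subject ensures no individual's segments appear in both training and test sets, which is what makes LOSOCV a fair estimate of cross-subject generalization.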