MLFusion: Multilevel Data Fusion using CNNs for atrial fibrillation detection.

Journal: Computers in Biology and Medicine
PMID:

Abstract

Data fusion, involving the simultaneous integration of signals from multiple sensors, is an emerging field that facilitates more accurate inferences in instrumentation applications. This paper presents a novel fusion methodology for multi-sensor multimodal data using convolutional neural networks (CNNs), which introduces a data-driven approach to automatically select the optimal level of information abstraction for fusion. Unlike traditional methods, the proposed model allows for a self-learning fusion process, eliminating the need for manual selection of the fusion level by the designer. The model uniquely integrates feature extraction and fusion into a unified framework, enhancing efficiency and performance. Additionally, the fusion network incorporates signal quality indicators (SQIs) as input streams, enabling the model to account for the quality of each input signal during the fusion process. The methodology is evaluated on the task of atrial fibrillation detection through the fusion of two multimodal signal inputs taken from a portion of the MIMIC III database. The fusion network, using electrocardiogram (ECG) and photoplethysmogram (PPG) signals as inputs, and trained with an average loss function, achieved an accuracy of 99.33% and a sensitivity of 99.74%. The model's robustness in the presence of noisy inputs is also analyzed, demonstrating the effectiveness of the SQI-based multi-level fusion approach. This novel methodology represents a significant advancement in data fusion by offering a fully automated, quality-aware approach to multi-sensor multimodal signal fusion.
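To illustrate the quality-aware fusion idea described in the abstract, the sketch below shows one simple way SQIs could down-weight a noisy modality when combining per-modality features. This is a minimal NumPy illustration under assumed conventions (fixed-length feature vectors, SQIs in [0, 1]), not the paper's CNN architecture or its learned fusion-level selection; the function name and example values are hypothetical.

```python
import numpy as np

def sqi_weighted_fusion(features, sqis):
    """Illustrative SQI-weighted fusion (not the paper's exact method).

    features: list of 1-D arrays, one per modality, all the same length.
    sqis: per-modality signal quality scores in [0, 1].
    Returns a fused feature vector where low-quality modalities
    contribute proportionally less.
    """
    w = np.asarray(sqis, dtype=float)
    w = w / w.sum()                       # normalize quality weights to sum to 1
    stacked = np.stack(features)          # shape: (n_modalities, n_features)
    return (w[:, None] * stacked).sum(axis=0)

# Hypothetical example: clean ECG (SQI 0.9) trusted more than noisy PPG (SQI 0.3)
ecg_feat = np.array([0.8, 0.2, 0.5])
ppg_feat = np.array([0.4, 0.6, 0.1])
fused = sqi_weighted_fusion([ecg_feat, ppg_feat], sqis=[0.9, 0.3])
# fused = 0.75 * ecg_feat + 0.25 * ppg_feat -> [0.7, 0.3, 0.4]
```

In the paper's model the fusion weights and abstraction level are learned by the network itself from the SQI input streams, rather than applied as a fixed linear weighting as in this sketch.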

Authors

  • Arlene John
  • Keshab K Parhi
  • Barry Cardiff
  • Deepu John