Deep Learning-Based Multimodal Data Fusion: Case Study in Food Intake Episodes Detection Using Wearable Sensors.

Journal: JMIR mHealth and uHealth
Published Date:

Abstract

BACKGROUND: Multimodal wearable technologies have opened up wide possibilities in human activity recognition, and more specifically in personalized monitoring of eating habits. The emerging challenge is selecting the most discriminative information from high-dimensional data collected from multiple sources. The available fusion algorithms, with their complex structures, are poorly adapted to computationally constrained environments that require integrating information directly at the source. As a result, simpler low-level fusion methods are needed.
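The low-level fusion the abstract calls for can be sketched as feature-level (early) fusion: modalities are joined directly at the data or feature level before any classifier is applied. The sketch below is a minimal illustration of this general idea, not the paper's method; the sensor modalities, window counts, and feature dimensions are hypothetical.

```python
import numpy as np

# Hypothetical low-level (early) fusion of two wearable sensor streams.
# Each row is one time window; columns are per-modality features.
rng = np.random.default_rng(0)
accel = rng.standard_normal((100, 3))  # e.g. accelerometer features per window
audio = rng.standard_normal((100, 5))  # e.g. audio features per window

# Early fusion: concatenate modalities at the feature level so a single
# downstream model sees one joint feature vector per window.
fused = np.concatenate([accel, audio], axis=1)
print(fused.shape)  # → (100, 8)
```

In this early-fusion scheme the combination step is a cheap array operation, which is what makes it attractive for the computationally constrained, at-the-source settings the abstract describes.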

Authors

  • Nooshin Bahador
  • Denzil Ferreira
    Center for Ubiquitous Computing, University of Oulu, Oulu, Finland.
  • Satu Tamminen
    Faculty of Information Technology and Electrical Engineering, University of Oulu, Oulu, Finland.
  • Jukka Kortelainen