Deep neural networks enable quantitative movement analysis using single-camera videos.

Journal: Nature Communications
Published Date:

Abstract

Many neurological and musculoskeletal diseases impair movement, which limits people's function and social participation. Quantitative assessment of motion is critical to medical decision-making but is currently possible only with expensive motion capture systems and highly trained personnel. Here, we present a method for predicting clinically relevant motion parameters from an ordinary video of a patient. Our machine learning models predict parameters including walking speed (r = 0.73), cadence (r = 0.79), knee flexion angle at maximum extension (r = 0.83), and the Gait Deviation Index (GDI), a comprehensive metric of gait impairment (r = 0.75). These correlation values approach the theoretical limits for accuracy imposed by natural variability in these metrics within our patient population. Our methods for quantifying gait pathology with commodity cameras increase access to quantitative motion analysis in clinics and at home and enable researchers to conduct large-scale studies of neurological and musculoskeletal disorders.
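The correlation values reported above quantify agreement between the video-based predictions and concurrent motion-capture (gait lab) measurements using Pearson's r. The sketch below illustrates that evaluation step only; it is not the authors' code, and the function name, variable names, and example numbers are hypothetical.

```python
# Minimal sketch (not the authors' implementation): compare a gait parameter
# predicted from single-camera video against lab-measured ground truth using
# Pearson's correlation coefficient, the agreement metric cited in the abstract.
import numpy as np
from scipy.stats import pearsonr


def evaluate_gait_predictions(predicted, measured):
    """Return Pearson r and p-value for paired predictions and lab measurements.

    predicted, measured: 1-D arrays of the same gait parameter (e.g., walking
    speed in m/s) for the same set of patient visits, in the same order.
    """
    r, p_value = pearsonr(predicted, measured)
    return r, p_value


# Hypothetical example values: walking speed (m/s) from the video model vs. gait lab.
predicted_speed = np.array([0.85, 1.10, 0.62, 1.30, 0.95])
measured_speed = np.array([0.80, 1.15, 0.70, 1.25, 1.00])
r, p = evaluate_gait_predictions(predicted_speed, measured_speed)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```

The same routine would apply to any of the reported parameters (cadence, knee flexion angle at maximum extension, GDI) by substituting the corresponding paired measurements.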

Authors

  • Łukasz Kidziński
Department of Bioengineering, Stanford University, Stanford, CA, USA.
  • Bryan Yang
Department of Bioengineering, Stanford University, Stanford, CA, USA.
  • Jennifer L Hicks
  • Apoorva Rajagopal
Department of Mechanical Engineering, Stanford University, Stanford, CA, USA.
  • Scott L Delp
  • Michael H Schwartz
Gillette Children's Specialty Healthcare, MN, USA; Department of Orthopaedic Surgery, University of Minnesota, MN, USA. Electronic address: schwa021@umn.edu.