A multi-pseudo-sensor fusion approach to estimating the lower limb joint moments based on deep neural network.
Journal: Medical & Biological Engineering & Computing
Published Date: Jul 9, 2025
Abstract
Reliable feedback of gait variables, such as joint moments, is critical for designing controllers of intelligent assistive devices that can assist the wearer outdoors. To estimate lower extremity joint moments quickly and accurately outside the laboratory, this paper proposes a novel multimodal motion-intent recognition system that fuses traditional deep learning models. The developed estimation method uses joint kinematics data and individual feature parameters to estimate lower limb joint moments in the sagittal plane under different motion conditions: walking, running, and stair ascent and descent. Specifically, seven deep learning models that combine convolutional neural networks, recurrent neural networks, and attention mechanisms are designed as the unit models of the framework. To improve the performance of the unit models, a data augmentation module is included in the system. Building on these unit models, a novel framework, DeepMPSF-Net, is proposed; it treats the output of each unit model as a pseudo-sensor observation and applies variable-weight fusion to improve classification accuracy and kinetics estimation performance. The results show that the augmented DeepMPSF-Net can accurately identify the locomotion mode, and the estimation performance (PCC) of joint moments is improved to 0.952 (walking), 0.988 (running), 0.925 (stair ascent), and 0.921 (stair descent). These results suggest that the estimation system can contribute to the development of intelligent assistive devices for the lower limbs.
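To make the fusion idea concrete, the sketch below shows one plausible way to treat the outputs of several unit models as pseudo-sensor observations and combine them with learnable, normalized weights. The class name, tensor shapes, and the use of a softmax over per-model logits are illustrative assumptions, not the authors' DeepMPSF-Net implementation.

```python
import torch
import torch.nn as nn


class PseudoSensorFusion(nn.Module):
    """Hypothetical variable-weight fusion of K pseudo-sensor (unit-model) outputs.

    Each unit model is assumed to output an estimate of the sagittal-plane joint
    moment sequence; a softmax over learnable logits yields per-model fusion
    weights. This is a sketch, not the paper's actual architecture.
    """

    def __init__(self, num_models: int):
        super().__init__()
        # One learnable logit per pseudo-sensor; the softmax keeps the fusion
        # weights positive and summing to one.
        self.logits = nn.Parameter(torch.zeros(num_models))

    def forward(self, unit_outputs: torch.Tensor) -> torch.Tensor:
        # unit_outputs: (num_models, batch, seq_len, num_joints)
        weights = torch.softmax(self.logits, dim=0)   # (num_models,)
        weights = weights.view(-1, 1, 1, 1)           # broadcast over the rest
        return (weights * unit_outputs).sum(dim=0)    # fused moment estimate


if __name__ == "__main__":
    # Seven hypothetical unit models, 8 gait cycles, 100 time steps,
    # 3 sagittal-plane joints (hip, knee, ankle).
    outputs = torch.randn(7, 8, 100, 3)
    fusion = PseudoSensorFusion(num_models=7)
    fused = fusion(outputs)
    print(fused.shape)  # torch.Size([8, 100, 3])
```

In this reading, the fusion weights would be trained jointly with (or after) the unit models, so pseudo-sensors that track the reference joint moments more closely receive larger weights.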