Neuroadaptive Admittance Control for Human-Robot Interaction With Human Motion Intention Estimation and Output Error Constraint.
Journal:
IEEE Transactions on Cybernetics
PMID:
40198290
Abstract
Human-robot interaction (HRI) is a crucial component of robotics, and enabling faster response, higher accuracy, and lower human effort is essential to improving the efficiency, robustness, and applicability of HRI-driven tasks. In this article, we develop a novel neuroadaptive admittance control scheme with human motion intention (HMI) estimation and output error constraint for natural and stable interaction. First, the robot's interaction force measurements are used to estimate the HMI, and the stiffness of the admittance model is dynamically updated from surface electromyography (sEMG) signals of the human upper limb to achieve human-like compliance. Then, based on the designed error transformation mechanism, an innovative prescribed performance control (PPC) scheme is proposed that drives the trajectory error into the prescribed constraint range within a predefined time for any bounded initial condition, enabling the robot to move in the desired direction guided by the human while satisfying the prescribed transient and steady-state performance. In addition, an adaptive neural network (NN) is employed to compensate for the uncertainties of the robot dynamics and further improve tracking accuracy. A Lyapunov stability analysis ensures that all states of the closed-loop system remain globally uniformly ultimately bounded. Finally, a series of real-world robot experiments demonstrates the effectiveness of the proposed framework.
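
To make the variable-stiffness admittance stage concrete, the sketch below shows one plausible form of it: the measured interaction force and the estimated HMI drive a virtual mass-damper-spring model whose stiffness is modulated by a normalized sEMG activation level. The class name, the gains, and the linear activation-to-stiffness mapping are illustrative assumptions for exposition, not the authors' implementation.

import numpy as np

# Minimal sketch (assumed form, not the paper's exact model): a discrete-time
# admittance model M*x_ddot + D*x_dot + K(t)*(x - x_intent) = f_h that converts
# the measured human interaction force f_h into a reference trajectory, with
# the stiffness K(t) modulated by a normalized sEMG activation in [0, 1].
class VariableAdmittance:
    def __init__(self, mass=2.0, damping=15.0, k_min=50.0, k_max=300.0, dt=0.002):
        self.M, self.D = mass, damping           # virtual inertia and damping (illustrative values)
        self.k_min, self.k_max = k_min, k_max    # stiffness range [N/m]
        self.dt = dt
        self.x = 0.0                             # admittance state (1-DoF for clarity)
        self.x_dot = 0.0

    def stiffness(self, semg_activation):
        # Map normalized sEMG activation to stiffness: higher muscle activation
        # yields a stiffer admittance, mimicking human-like compliance
        # (assumed linear mapping for illustration).
        a = np.clip(semg_activation, 0.0, 1.0)
        return self.k_min + a * (self.k_max - self.k_min)

    def step(self, f_h, x_intent, semg_activation):
        # Integrate one admittance step given the interaction force f_h [N],
        # the estimated human motion intention x_intent [m], and sEMG activation.
        K = self.stiffness(semg_activation)
        x_ddot = (f_h - self.D * self.x_dot - K * (self.x - x_intent)) / self.M
        self.x_dot += x_ddot * self.dt
        self.x += self.x_dot * self.dt
        return self.x                            # reference position for the inner tracking loop

if __name__ == "__main__":
    adm = VariableAdmittance()
    ref = adm.step(f_h=5.0, x_intent=0.1, semg_activation=0.4)
    print(f"reference position after one step: {ref:.6f} m")

In the full framework, the reference produced by such an admittance stage would feed the inner tracking loop, where the PPC error transformation constrains the tracking error and the adaptive NN compensates for model uncertainties, as described in the abstract.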