Compared with robots, humans can learn to perform various contact tasks in unstructured environments by modulating the impedance characteristics of their arms. In this article, we consider endowing industrial robots with this compliant ability so that they can effectively learn to perform repetitive force-sensitive tasks. Current learning-based impedance control methods usually suffer from inefficiency. This paper establishes an efficient variable impedance control method. To improve learning efficiency, we employ a probabilistic Gaussian process model as the transition dynamics of the system for internal simulation, permitting long-term inference and planning in a Bayesian manner. The optimal impedance regulation strategy is then searched for using a model-based reinforcement learning algorithm. The effectiveness and efficiency of the proposed method are verified through force control tasks on a 6-DoF Reinovo industrial manipulator.
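The core loop this abstract describes, fitting a Gaussian process to observed transitions and then planning against the learned model instead of the real robot, can be sketched in a toy 1-D force-control setting. Everything below (the admittance-style plant, the environment stiffness `K_ENV`, the candidate impedance grid, the input scaling) is an illustrative assumption for the sketch, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
DT, K_ENV, F_DES = 0.05, 100.0, 10.0   # sample time, environment stiffness, desired force

def true_step(x, K):
    # Admittance-style plant: motion driven by force error, scaled by impedance K.
    return x + DT * (F_DES - K_ENV * x) / K

def kern(A, B, ls=1.0):
    # Squared-exponential kernel between row-stacked inputs.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

class GP:
    """Plain GP regression, serving as the probabilistic transition model."""
    def __init__(self, Z, y, noise=1e-4):
        self.Z = Z
        self.alpha = np.linalg.solve(kern(Z, Z) + noise * np.eye(len(Z)), y)
    def predict(self, z):
        return float(kern(z[None, :], self.Z) @ self.alpha)

feat = lambda x, K: np.array([10.0 * x, K / 10.0])   # crude input scaling

# 1) Collect random transitions from the "real" system.
X  = rng.uniform(0.0, 0.2, 200)
Ks = rng.uniform(3.0, 25.0, 200)
Z  = np.stack([feat(x, K) for x, K in zip(X, Ks)])
y  = np.array([true_step(x, K) - x for x, K in zip(X, Ks)])   # learn the state change
model = GP(Z, y)

# 2) Internal simulation: roll the GP model out and score each candidate impedance.
def simulated_cost(K, x0=0.0, horizon=30):
    x, cost = x0, 0.0
    for _ in range(horizon):
        x += model.predict(feat(x, K))      # model-based rollout, no real trials
        cost += (K_ENV * x - F_DES) ** 2    # force-tracking cost
    return cost

candidates = [3.0, 5.0, 10.0, 20.0]
best_K = min(candidates, key=simulated_cost)
```

The data efficiency claimed in the abstract comes from step 2: once the GP is fitted, arbitrarily many candidate impedance settings can be evaluated in simulation without touching the physical manipulator.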
In this work, a novel approach to designing an online tracking controller for a nonholonomic wheeled mobile robot (WMR) is presented. The controller consists of a nonlinear neural feedback compensator, a PD control law, and a supervisory element that assures the stability of the system. The neural network for feedback compensation is trained through approximate dynamic programming (ADP). To obtain stability during the learning phase and robustness in the face of disturbances, an additional control signal, derived from the Lyapunov stability theorem on the basis of variable structure systems theory, is provided. The proposed control algorithm was verified on a Pioneer–2DX wheeled mobile robot and confirmed the assumed behavior of the control system.
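The three-part control signal described here (PD term, neural compensator, supervisory switching term built on a sliding variable) can be illustrated on a 1-D unit-mass plant. This is a simplified stand-in: the adaptation law below is a plain gradient rule driven by the sliding variable rather than the paper's ADP training, and the plant, gains, and disturbance model are all assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
DT = 0.01

def rbf_features(x, centers, width=1.0):
    # Radial basis features of the state, the hidden layer of the compensator.
    return np.exp(-((x[None, :] - centers) ** 2).sum(-1) / (2 * width**2))

centers = rng.uniform(-2, 2, size=(25, 2))
W = np.zeros(25)                        # compensator output weights, adapted online
Kp, Kd, Ks, gamma = 25.0, 10.0, 2.0, 10.0

def disturbance(q, dq):
    # Unknown dynamics the compensator must cancel (friction + gravity-like term).
    return 4.0 * np.tanh(dq) + 3.0 * np.sin(q)

q, dq, errs = 0.0, 0.0, []
for t in np.arange(0, 10, DT):
    qd, dqd = np.sin(t), np.cos(t)      # reference trajectory
    e, de = qd - q, dqd - dq
    s = de + 5.0 * e                    # sliding variable from VSS theory
    phi = rbf_features(np.array([q, dq]), centers)
    u_pd = Kp * e + Kd * de             # PD control law
    u_nn = W @ phi                      # neural feedback compensator
    u_sv = Ks * np.tanh(10 * s)         # smoothed supervisory (switching) term
    u = u_pd + u_nn + u_sv
    W += DT * gamma * s * phi           # online weight adaptation
    ddq = u - disturbance(q, dq)        # unit-mass plant
    dq += DT * ddq
    q  += DT * dq
    errs.append(abs(e))
```

The supervisory `tanh` term plays the role of the Lyapunov-derived signal: it bounds the error while the compensator weights are still far from converged, which is exactly the learning-phase stability problem the abstract raises.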
The paper presents a method for designing a neural speed controller using reinforcement learning. The controlled object is an electric drive with a permanent magnet synchronous motor, which has a complex mechanical structure and changeable parameters. Several case studies of the control system with the neural controller are presented, focusing on changes in the object's parameters. The influence of the critic on system behaviour is also investigated, where the critic is a function of the control error and the energy cost. This ensures long-term performance stability without the need to switch off the adaptation algorithm. Numerous simulation tests were carried out and confirmed on a real test stand.
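The critic signal described here, a weighted mix of control error and energy cost, can be sketched on a first-order drive model. This is a deliberately reduced stand-in: a single tunable gain replaces the neural controller, stochastic hill-climbing replaces the paper's reinforcement learning algorithm, and the drive parameters and weights `alpha`/`beta` are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
DT, J, B, KT = 0.001, 0.01, 0.02, 0.5   # sample time, inertia, friction, torque const

def episode_cost(k, alpha=1.0, beta=0.01, w_ref=100.0, T=1.0):
    """Critic: cumulative cost mixing speed-error and energy (current) terms."""
    w, cost = 0.0, 0.0
    for _ in range(int(T / DT)):
        e = w_ref - w
        i = np.clip(k * e, -50.0, 50.0)          # gain-based stand-in for the neural controller
        w += DT * (KT * i - B * w) / J           # first-order drive model
        cost += DT * (alpha * e**2 + beta * i**2)
    return cost

# Stochastic hill-climbing over the controller parameter, guided by the critic.
k = 0.1
for _ in range(60):
    k_try = abs(k + rng.normal(0.0, 0.5))
    if episode_cost(k_try) < episode_cost(k):    # keep only improvements
        k = k_try
```

The `beta * i**2` term is what makes the critic energy-aware: with `beta = 0` the search would push the gain toward ever more aggressive (and energy-hungry) control, whereas penalising current keeps the adapted controller at a sustainable operating point, matching the abstract's point about long-term stability under continuous adaptation.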