Model-based control methods such as control Lyapunov functions and control barrier functions can provide formal guarantees of stability and safety for dynamic legged locomotion, given a precise model of the system. In contrast, learning-based approaches such as reinforcement learning have demonstrated remarkable robustness and adaptability to model uncertainty in achieving quadrupedal locomotion. However, reinforcement learning-based policies lack formal guarantees. In this presentation, I will demonstrate that simple techniques from nonlinear control theory can be used to establish formal stability guarantees for reinforcement learning policies. Moreover, I will illustrate the potential of reinforcement learning for more complex bipedal and humanoid robots, as well as for loco-manipulation tasks that entail both locomotion and manipulation. This raises an intriguing question: Is reinforcement learning alone sufficient for achieving dynamic legged locomotion, or is there still a need for model-based control methods?
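To make the Lyapunov-based idea concrete, the following is a minimal sketch of how a stability certificate can be checked numerically for a fixed policy. All system matrices, gains, and the quadratic Lyapunov candidate below are illustrative assumptions (a discrete-time double integrator with a linear state-feedback stand-in for a learned policy), not the method presented in the talk.

```python
import numpy as np

# Discrete-time double integrator: x_{k+1} = A x_k + B u_k (assumed model).
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])

# Hypothetical stand-in for a learned policy: linear feedback u = -K x.
K = np.array([[1.0, 1.5]])

# Candidate quadratic Lyapunov function V(x) = x^T P x (assumed).
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def V(x):
    """Evaluate the Lyapunov candidate at state x."""
    return float(x.T @ P @ x)

def step(x):
    """One closed-loop step under the fixed policy."""
    u = -K @ x
    return A @ x + B @ u

def lyapunov_decrease_holds(n_samples=1000, seed=0):
    """Sample-based check that V strictly decreases along closed-loop
    trajectories, i.e. V(x_{k+1}) < V(x_k) for nonzero states."""
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0, size=(2, 1))
        if np.linalg.norm(x) < 1e-6:
            continue  # skip states too close to the origin
        if V(step(x)) >= V(x):
            return False
    return True

print(lyapunov_decrease_holds())  # → True
```

For this linear example the sampling check can be replaced by an exact test that the matrix (A - BK)^T P (A - BK) - P is negative definite; for the nonlinear, neural-network policies discussed in the talk, such certificates require more careful verification.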