Part 2: Deep reinforcement learning (RL), a data-driven method capable of discovering complex control strategies for high-dimensional systems, requires substantial interaction with the target system, making it costly when that system is computationally or experimentally expensive to evaluate (e.g., flow control). We mitigate this challenge by combining dimension reduction via an autoencoder with a neural ODE framework to learn a low-dimensional dynamical model, which we use in place of the true system during RL training to efficiently estimate the control policy. We apply our method to data from the Kuramoto-Sivashinsky equation. With the goal of minimizing dissipation, we extract control policies from the model using RL and show that the model-based strategies perform well on the full dynamical system. Notably, the RL agent discovers and stabilizes a forced equilibrium solution, despite never having been given explicit information about this state's existence.
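
The sketch below illustrates the kind of surrogate described above: an autoencoder that compresses the full state to a latent code, a neural ODE vector field for the latent dynamics with the control action as an input, and a rollout step that stands in for the expensive solver during RL training. All names, layer sizes, the 64-point state dimension, and the RK4 integrator are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumes PyTorch); dimensions and networks are placeholders.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Maps a full state snapshot u to a low-dimensional latent h and back."""
    def __init__(self, full_dim=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(full_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, full_dim))

    def forward(self, u):
        h = self.encoder(u)
        return self.decoder(h), h

class LatentODE(nn.Module):
    """Neural ODE vector field dh/dt = f(h, a), with the control action a
    entering as an extra input to the learned dynamics."""
    def __init__(self, latent_dim=8, action_dim=4):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))

    def forward(self, h, a):
        return self.f(torch.cat([h, a], dim=-1))

def rollout_step(ode, h, a, dt=0.05):
    """One RK4 step of the learned latent dynamics; this surrogate replaces
    the expensive simulator inside the RL training loop."""
    k1 = ode(h, a)
    k2 = ode(h + 0.5 * dt * k1, a)
    k3 = ode(h + 0.5 * dt * k2, a)
    k4 = ode(h + dt * k3, a)
    return h + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Usage: encode an observed snapshot, advance it under a candidate action,
# and decode to evaluate a dissipation-based reward for the RL agent.
ae, ode = Autoencoder(), LatentODE()
u = torch.randn(1, 64)   # placeholder state snapshot
a = torch.zeros(1, 4)    # placeholder control action
_, h = ae(u)
h_next = rollout_step(ode, h, a)
u_next = ae.decoder(h_next)
```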