arXiv paper that improves the sample efficiency of reinforcement learning for flow control by replacing the neural critic with an adaptive reduced-order model, addressing the computational-cost barrier to applying RL in fluid-dynamics simulations.
Research
Enhancing sample efficiency in reinforcement-learning-based flow control: replacing the critic with an adaptive reduced-order model
Researchers replace neural critics with adaptive reduced-order models in reinforcement learning for fluid dynamics, dramatically cutting training data needs and computational cost.
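The core idea, swapping an expensive learned critic for a cheap surrogate value model fitted to rollout data, can be illustrated with a minimal sketch. This is not the authors' implementation; it stands in for their adaptive reduced-order model with a simple linear least-squares baseline fitted online, and all names (`fit_rom_critic`, `rom_value`) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's adaptive reduced-order model:
# a linear value model fitted by least squares replaces a neural critic.
def fit_rom_critic(states, returns):
    # Augment with a bias column; solve V(s) ~ w . [s, 1] in least squares.
    X = np.hstack([states, np.ones((len(states), 1))])
    w, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return w

def rom_value(w, states):
    X = np.hstack([states, np.ones((len(states), 1))])
    return X @ w

# Toy rollout data: 1-D state, returns roughly linear in the state.
states = rng.normal(size=(64, 1))
returns = 3.0 * states[:, 0] + 1.0 + 0.01 * rng.normal(size=64)

w = fit_rom_critic(states, returns)
# Baseline-subtracted signal a policy-gradient update would consume.
advantages = returns - rom_value(w, states)
```

Because the surrogate is refitted from each batch of rollouts rather than trained by gradient descent, the value estimate costs a single linear solve per update, which is one plausible route to the sample-efficiency gains the paper targets.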
Wednesday, April 8, 2026, 12:00 PM UTC · 2 MIN READ · SOURCE: arXiv CS.LG (Machine Learning)
Tags
research