Alekh Agarwal - Towards a Theory of Representation Learning for Reinforcement Learning
Provably sample-efficient reinforcement learning from rich observational inputs remains a key open challenge in reinforcement learning research. While impressive recent advances allow sample-efficient exploration and learning with linear models, the handling of more general non-linear models remains limited. In this talk, we study reinforcement learning with linear models where the features underlying the linear model are learned rather than specified a priori. While the broader question of representation learning for useful embeddings of complex data has seen tremendous progress, doing so in reinforcement learning presents additional challenges: good representations cannot be discovered without adequate exploration, but effective exploration is challenging in the absence of good representations.

Concretely, we study this question in the context of low-rank MDPs [Jiang et al., 2017; Jin et al., 2019; Yang and Wang, 2019], where, unlike in most prior work, the features underlying a state-action pair are not assumed to be known. We develop two styles of methods, model-based and model-free. In the model-based method, we learn an approximate factorization of the transition model, plan within the model to obtain a fresh exploratory policy, and then update our factorization with the additional data. In the model-free method, we learn features so that quantities such as value functions at subsequent states can be predicted linearly in those features. In both approaches, we address the intricate coupling between exploration and representation learning, and we provide sample complexity guarantees. More details can be found at https://arxiv.org/abs/2006.10814 and https://arxiv.org/abs/2102.07035.
[Based on joint work with Jinglin Chen, Nan Jiang, Sham Kakade, Akshay Krishnamurthy, Aditya Modi, and Wen Sun]
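As a rough illustration of the low-rank MDP setting behind the model-based approach: the transition kernel factorizes as T(s' | s, a) = ⟨φ(s, a), μ(s')⟩ for d-dimensional embeddings φ and μ, both unknown to the learner. The following minimal Python sketch is a toy tabular construction under assumed dimensions (it is not the papers' algorithm, and all names and sizes are illustrative); it builds such a kernel and checks that it has rank at most d.

```python
import numpy as np

# Toy construction of a low-rank MDP transition kernel (illustrative only;
# in the actual setting phi and mu are unknown and must be learned).
rng = np.random.default_rng(0)
S, A, d = 6, 3, 2                       # states, actions, embedding dimension

phi = rng.random((S, A, d))             # features phi(s, a) in R^d
mu = rng.random((S, d))                 # measures mu(s') in R^d

# T[s, a, s'] = <phi(s, a), mu(s')>, normalized so each T[s, a, :] sums to 1.
# Normalizing only rescales phi(s, a), so the rank-d structure is preserved.
T = np.einsum("sad,td->sat", phi, mu)
T /= T.sum(axis=2, keepdims=True)

# Viewed as an (S * A) x S matrix, the kernel has rank at most d.
print(np.linalg.matrix_rank(T.reshape(S * A, S)))  # rank <= d; prints 2 here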
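The model-free approach rests on a consequence of this factorization: for any function f over next states, the backup E[f(s') | s, a] is exactly linear in the (normalized) features, with weight vector w_f = Σ_{s'} f(s') μ(s'). A small numerical check under the same toy setup, again an illustration rather than the papers' method:

```python
import numpy as np

# Rebuild the toy low-rank kernel from the previous sketch.
rng = np.random.default_rng(0)
S, A, d = 6, 3, 2
phi = rng.random((S, A, d))
mu = rng.random((S, d))
T = np.einsum("sad,td->sat", phi, mu)
Z = T.sum(axis=2, keepdims=True)
T /= Z
phi_norm = phi / Z                      # normalized features, still d-dimensional

# For any f : S -> R, E[f(s') | s, a] = <phi_norm(s, a), w_f> with
# w_f = sum_{s'} f(s') mu(s'); backups of arbitrary functions are linear
# in the features, which is the property the model-free method exploits.
f = rng.random(S)                       # e.g. a candidate value function
backup = np.einsum("sat,t->sa", T, f)   # exact expectation E[f(s') | s, a]
w_f = mu.T @ f                          # d-dimensional linear weight
print(np.allclose(backup, phi_norm @ w_f))  # True
```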