Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection
Document type:
Conference paper with proceedings
Title:
Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection
Author(s):
Papini, Matteo [Author]
Department of Electronics, Information, and Bioengineering [Milano] [DEIB]
Tirinzoni, Andrea [Author]
Scool [Scool]
Pacchiano, Aldo [Author]
Microsoft Research
Restelli, Marcello [Author]
Department of Electronics, Information, and Bioengineering [Milano] [DEIB]
Lazaric, Alessandro [Author]
Facebook AI Research [Paris] [FAIR]
Pirotta, Matteo [Author]
Facebook AI Research [Paris] [FAIR]
Conference title:
Thirty-Fifth Conference on Neural Information Processing Systems
City:
Virtual
Country:
France
Conference start date:
2021-12-06
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
English abstract: [en]
We study the role of the representation of state-action value functions in regret minimization in finite-horizon Markov Decision Processes (MDPs) with linear structure. We first derive a necessary condition on the representation, called universally spanning optimal features (UNISOFT), to achieve constant regret in any MDP with linear reward function. This result encompasses the well-known settings of low-rank MDPs and, more generally, zero inherent Bellman error (also known as the Bellman closure assumption). We then demonstrate that this condition is also sufficient for these classes of problems by deriving a constant regret bound for two optimistic algorithms (LSVI-UCB and ELEANOR). Finally, we propose an algorithm for representation selection and we prove that it achieves constant regret when one of the given representations, or a suitable combination of them, satisfies the UNISOFT condition.
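For context, the regret over K episodes referenced in the abstract is the standard quantity

\mathrm{Regret}(K) = \sum_{k=1}^{K} \big( V_1^{*}(s_1^k) - V_1^{\pi_k}(s_1^k) \big),

where V_1^{*} is the optimal value function and \pi_k is the policy played in episode k; "constant regret" means this sum stays bounded as K grows.

As a rough illustration of one of the optimistic algorithms named in the abstract, below is a minimal Python sketch of LSVI-UCB on a toy tabular MDP with one-hot features (a special case of a linear MDP). The toy environment, the ridge parameter lam, and the bonus coefficient beta are illustrative assumptions, not values from the paper, and the sketch omits the paper's UNISOFT analysis and representation-selection machinery.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: a small tabular MDP encoded with one-hot
# features, which makes it a trivial linear (low-rank) MDP.
S, A, H = 5, 2, 4
d = S * A
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
R = rng.uniform(size=(S, A))                 # mean rewards in [0, 1]
s0 = 0                                       # fixed initial state

def phi(s, a):
    """One-hot feature map phi(s, a) in R^d."""
    v = np.zeros(d)
    v[s * A + a] = 1.0
    return v

def lsvi_ucb(K, beta=1.0, lam=1.0):
    """Run K episodes of LSVI-UCB; beta is a hand-picked bonus
    coefficient, not the theoretical value from the analysis."""
    data = [[] for _ in range(H)]            # per step h: list of (s, a, r, s')
    returns = []
    for _ in range(K):
        # Backward pass: ridge regression on Bellman targets + UCB bonus.
        V_next = np.zeros(S)                 # V_H = 0
        Q = np.zeros((H, S, A))
        for h in range(H - 1, -1, -1):
            Lam = lam * np.eye(d)
            b = np.zeros(d)
            for (s, a, r, sp) in data[h]:
                f = phi(s, a)
                Lam += np.outer(f, f)
                b += f * (r + V_next[sp])
            Lam_inv = np.linalg.inv(Lam)
            w = Lam_inv @ b
            for s in range(S):
                for a in range(A):
                    f = phi(s, a)
                    bonus = beta * np.sqrt(f @ Lam_inv @ f)
                    Q[h, s, a] = min(f @ w + bonus, H)   # clip at horizon
            V_next = Q[h].max(axis=1)
        # Forward pass: act greedily w.r.t. the optimistic Q-estimate.
        s, ep_ret = s0, 0.0
        for h in range(H):
            a = int(np.argmax(Q[h, s]))
            r = R[s, a] + 0.1 * rng.standard_normal()    # noisy reward
            sp = rng.choice(S, p=P[s, a])
            data[h].append((s, a, r, sp))
            ep_ret += R[s, a]
            s = sp
        returns.append(ep_ret)
    return returns

print(np.mean(lsvi_ucb(200)[-50:]))  # average return over the last 50 episodes

With one-hot features the least-squares step reduces to tabular estimation, so the sketch is only meant to show the structure of the algorithm: ridge regression on Bellman targets, an elliptical confidence bonus for optimism, and greedy rollouts.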
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files
- https://hal.inria.fr/hal-03479324/document
- Open access
- Access the document
- unisoft.pdf
- Open access
- Access the document