
Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection

Document type :
Conference paper with proceedings
Title :
Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection
Author(s) :
Papini, Matteo [Author]
Tirinzoni, Andrea [Author]
Scool [Scool]
Pacchiano, Aldo [Author]
Microsoft Research
Restelli, Marcello [Author]
Lazaric, Alessandro [Author]
Facebook AI Research [Paris] [FAIR]
Pirotta, Matteo [Author]
Facebook AI Research [Paris] [FAIR]
Conference title :
Thirty-Fifth Conference on Neural Information Processing Systems
City :
Virtual
Country :
France
Start date of the conference :
2021-12-06
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
English abstract : [en]
We study the role of the representation of state-action value functions in regret minimization in finite-horizon Markov Decision Processes (MDPs) with linear structure. We first derive a necessary condition on the representation, called universally spanning optimal features (UNISOFT), to achieve constant regret in any MDP with linear reward function. This result encompasses the well-known settings of low-rank MDPs and, more generally, zero inherent Bellman error (also known as the Bellman closure assumption). We then demonstrate that this condition is also sufficient for these classes of problems by deriving a constant regret bound for two optimistic algorithms (LSVI-UCB and ELEANOR). Finally, we propose an algorithm for representation selection and we prove that it achieves constant regret when one of the given representations, or a suitable combination of them, satisfies the UNISOFT condition.
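For intuition, the optimistic least-squares step at the core of LSVI-UCB can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the single-step regression targets, and the fixed bonus scale `beta` are all simplifying assumptions; in the actual algorithm the targets come from backward induction over the horizon and `beta` is set from a concentration bound.

```python
import numpy as np

def lsvi_ucb_q_values(phi, targets, lam=1.0, beta=1.0):
    """One optimistic regression step in the style of LSVI-UCB (sketch).

    phi:     (n, d) feature vectors of observed state-action pairs
    targets: (n,) regression targets (reward plus next-state value estimate)
    lam:     ridge regularization; beta: exploration-bonus scale (assumed fixed here)
    Returns optimistic Q-value estimates at the observed pairs.
    """
    n, d = phi.shape
    # Regularized design matrix: Lambda = Phi^T Phi + lam * I
    Lam = phi.T @ phi + lam * np.eye(d)
    Lam_inv = np.linalg.inv(Lam)
    # Ridge least-squares weights fitting the targets
    w = Lam_inv @ (phi.T @ targets)
    # Elliptical confidence bonus: beta * sqrt(phi_i^T Lambda^{-1} phi_i)
    bonus = beta * np.sqrt(np.einsum('ij,jk,ik->i', phi, Lam_inv, phi))
    return phi @ w + bonus  # optimistic (upper-confidence) Q estimates
```

The bonus term is non-negative, so the returned estimates upper-bound the plain least-squares prediction; the UNISOFT condition studied in the paper concerns which feature maps `phi` make this optimism yield constant, rather than growing, regret.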
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
  • Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Source :
Harvested from HAL
Files
  • https://hal.inria.fr/hal-03479324/document
  • Open access