Document type:
Conference paper (published proceedings)
Title:
A Provably Efficient Sample Collection Strategy for Reinforcement Learning
Author(s):
Tarbouriech, Jean [Author]
Facebook AI Research [Paris] [FAIR]
Scool [Scool]
Pirotta, Matteo [Author]
Facebook AI Research [Paris] [FAIR]
Valko, Michal [Author]
DeepMind [Paris]
Lazaric, Alessandro [Author]
Facebook AI Research [Paris] [FAIR]
Conference:
Neural Information Processing Systems (NeurIPS)
City:
Virtual/Sydney
Country:
Australia
Conference start date:
2021-12-06
HAL discipline(s):
Statistiques [stat]/Machine Learning [stat.ML]
Informatique [cs]/Apprentissage [cs.LG]
Abstract (English):
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose to tackle the exploration-exploitation problem following a decoupled approach composed of: 1) an "objective-specific" algorithm that (adaptively) prescribes how many samples to collect at which states, as if it has access to a generative model (i.e., a simulator of the environment); 2) an "objective-agnostic" sample collection exploration strategy responsible for generating the prescribed samples as fast as possible. Building on recent methods for exploration in the stochastic shortest path problem, we first provide an algorithm that, given as input the number of samples b(s, a) needed in each state-action pair, requires O(BD + D^{3/2} S^2 A) time steps to collect the B = \sum_{s,a} b(s, a) desired samples, in any unknown communicating MDP with S states, A actions and diameter D. Then we show how this general-purpose exploration algorithm can be paired with "objective-specific" strategies that prescribe the sample requirements to tackle a variety of settings, e.g., model estimation, sparse reward discovery, and goal-free cost-free exploration in communicating MDPs, for which we obtain improved or novel sample complexity guarantees.
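The decoupled scheme described in the abstract can be pictured as two interchangeable modules. The following is an illustrative sketch only, not the authors' algorithm: the function names (`prescribe_budget`, `collect`) and the uniform toy budget are hypothetical, and a real objective-agnostic collector would have to navigate an unknown communicating MDP rather than simply tally the budget.

```python
# Illustrative sketch (hypothetical names, not the paper's code) of the
# decoupled approach: an "objective-specific" module prescribes a sampling
# budget b(s, a) per state-action pair, and an "objective-agnostic"
# collector is then responsible for fulfilling that budget.

def prescribe_budget(n_states, n_actions, per_pair=5):
    """Objective-specific step: decide how many samples each
    state-action pair needs (here a uniform toy budget)."""
    return {(s, a): per_pair
            for s in range(n_states)
            for a in range(n_actions)}

def collect(budget):
    """Objective-agnostic step: fulfil the prescribed budget.
    Here we only tally the total request B = sum_{s,a} b(s, a);
    the paper's collector gathers these samples in an unknown MDP
    in roughly O(BD + D^{3/2} S^2 A) time steps."""
    return sum(budget.values())

budget = prescribe_budget(n_states=3, n_actions=2)
B = collect(budget)  # B = 3 * 2 * 5 = 30 samples requested in total
```

Swapping in a different `prescribe_budget` (e.g., one tailored to model estimation or sparse reward discovery) changes the objective without touching the collector, which is the point of the decoupling.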
Language:
English
Peer reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files
- https://hal.inria.fr/hal-03479827/document
- Open access
- Access the document