Planning in Markov Decision Processes with Gap-Dependent Sample Complexity
Document type :
Conference paper with proceedings
Title :
Planning in Markov Decision Processes with Gap-Dependent Sample Complexity
Author(s) :
Jonsson, Anders [Author]
Kaufmann, Emilie [Author]
Scool [Scool]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Ménard, Pierre [Author]
Scool [Scool]
Domingues, Omar [Author]
Scool [Scool]
Leurent, Edouard [Author]
Scool [Scool]
RENAULT
Valko, Michal [Author]
DeepMind [Paris]
Conference title :
Neural Information Processing Systems
City :
Vancouver
Country :
Canada
Start date of the conference :
2020
Publication date :
2020-12-07
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
We propose MDP-GapE, a new trajectory-based Monte-Carlo Tree Search algorithm for planning in a Markov Decision Process in which transitions have a finite support. We prove an upper bound on the number of calls to the generative model needed for MDP-GapE to identify a near-optimal action with high probability. This problem-dependent sample complexity result is expressed in terms of the sub-optimality gaps of the state-action pairs that are visited during exploration. Our experiments reveal that MDP-GapE is also effective in practice, in contrast with other algorithms with sample complexity guarantees in the fixed-confidence setting, which are mostly theoretical.
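For intuition, the sub-optimality gaps the bound refers to are, in the standard formulation, Delta(s, a) = V*(s) - Q*(s, a). The sketch below computes these gaps by exact backward induction on a toy tabular finite-horizon MDP; it is illustrative only, is not the MDP-GapE algorithm, and all sizes, names, and the random toy model are assumptions.

```python
import numpy as np

# Toy tabular MDP (illustrative sizes): S states, A actions, horizon H.
S, A, H = 3, 2, 4
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
R = rng.uniform(size=(S, A))                 # deterministic rewards in [0, 1]

# Backward induction: Q_h(s, a) = R(s, a) + sum_{s'} P(s' | s, a) * V_{h+1}(s')
V = np.zeros(S)                              # V_{H+1} = 0
for h in range(H, 0, -1):
    Q = R + P @ V                            # shape (S, A)
    V = Q.max(axis=1)                        # V_h(s) = max_a Q_h(s, a)

# Sub-optimality gaps at the root step: Delta(s, a) = V_1*(s) - Q_1*(s, a).
# Zero for optimal actions, strictly positive otherwise; gap-dependent
# sample complexity bounds scale with (inverse powers of) these quantities.
gaps = V[:, None] - Q
print(gaps)
```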
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files
- MDPGapE_hal.pdf : https://hal.archives-ouvertes.fr/hal-02863486v2/document (open access)
- budget.pdf : https://hal.archives-ouvertes.fr/hal-02863486v2/file/budget.pdf (open access)
- simple_regret.pdf : https://hal.archives-ouvertes.fr/hal-02863486v2/file/simple_regret.pdf (open access)