Regret Minimization in MDPs with Options without Prior Knowledge
Document type :
Conference paper (with proceedings)
Title :
Regret Minimization in MDPs with Options without Prior Knowledge
Author(s) :
Fruit, Ronan [Auteur]
Sequential Learning [SEQUEL]
Pirotta, Matteo [Auteur]
Sequential Learning [SEQUEL]
Lazaric, Alessandro [Auteur]
Sequential Learning [SEQUEL]
Brunskill, Emma [Auteur]
Computer Science Department - Carnegie Mellon University
Conference title :
NIPS 2017 - Neural Information Processing Systems
City :
Long Beach
Country :
United States of America
Start date of the conference :
2017-12-04
Publication date :
2017-12
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
The option framework integrates temporal abstraction into the reinforcement learning model through the introduction of macro-actions (i.e., options). Recent works leveraged the mapping of Markov decision processes (MDPs) with options to semi-MDPs (SMDPs) and introduced SMDP-versions of exploration-exploitation algorithms (e.g., RMAX-SMDP and UCRL-SMDP) to analyze the impact of options on the learning performance. Nonetheless, the PAC-SMDP sample complexity of RMAX-SMDP can hardly be translated into equivalent PAC-MDP theoretical guarantees, while the regret analysis of UCRL-SMDP requires prior knowledge of the distributions of the cumulative reward and duration of each option, which are hardly available in practice. In this paper, we remove this limitation by combining the SMDP view together with the inner Markov structure of options into a novel algorithm whose regret performance matches UCRL-SMDP's up to an additive regret term. We show scenarios where this term is negligible and the advantage of temporal abstraction is preserved. We also report preliminary empirical results supporting the theoretical findings.
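As a minimal illustration of the option construct the abstract refers to, the sketch below (hypothetical Python, not the paper's algorithm) defines an option by an initiation set, an inner Markov policy, and a termination condition, and executes it in a toy chain MDP. Running an option produces a cumulative reward and a duration, i.e. the random quantities whose distributions UCRL-SMDP assumes as prior knowledge. The toy MDP, all names, and the parameter values are assumptions for illustration only.

```python
# Illustrative sketch of the options framework (macro-actions over an MDP).
# Not the algorithm introduced in the paper; names and the toy chain MDP are hypothetical.

import random
from dataclasses import dataclass
from typing import Callable, Set


@dataclass
class Option:
    initiation_set: Set[int]                  # states where the option can be started
    policy: Callable[[int], int]              # inner Markov policy: state -> primitive action
    termination_prob: Callable[[int], float]  # beta(s): probability of terminating in state s


def step(state: int, action: int, n_states: int = 6):
    """Toy chain MDP: action +1 moves right, -1 moves left; reward 1 at the right end."""
    next_state = min(max(state + action, 0), n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward


def run_option(state: int, option: Option, rng: random.Random):
    """Execute an option until its termination condition fires.

    Returns the next state, the cumulative reward, and the duration (holding time),
    i.e. the two random quantities that characterize the SMDP induced by the options.
    """
    assert state in option.initiation_set
    total_reward, duration = 0.0, 0
    while True:
        action = option.policy(state)
        state, reward = step(state, action)
        total_reward += reward
        duration += 1
        if rng.random() < option.termination_prob(state):
            return state, total_reward, duration


if __name__ == "__main__":
    rng = random.Random(0)
    # Hypothetical "go right" option, available everywhere, terminating at the goal
    # with probability 1 and elsewhere with probability 0.1.
    go_right = Option(
        initiation_set=set(range(6)),
        policy=lambda s: +1,
        termination_prob=lambda s: 1.0 if s == 5 else 0.1,
    )
    next_s, cum_reward, dur = run_option(0, go_right, rng)
    print(f"ended in state {next_s}, cumulative reward {cum_reward}, duration {dur}")
```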
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files :
- https://hal.inria.fr/hal-01649082/document (Open access)
- supplementary.pdf (Open access)