Tightening Exploration in Upper Confidence Reinforcement Learning
Document type :
Communication dans un congrès avec actes
Title :
Tightening Exploration in Upper Confidence Reinforcement Learning
Author(s) :
Bourel, Hippolyte [Author]
Sequential Learning [SEQUEL]
Maillard, Odalric Ambrym [Author]
Scool [Scool]
Sequential Learning [SEQUEL]
Talebi, Mohammad [Author]
Sequential Learning [SEQUEL]
Conference title :
International Conference on Machine Learning
City :
Vienna
Country :
Austria
Start date of the conference :
2020-07
HAL domain(s) :
Mathematics [math]/Statistics [math.ST]
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Artificial Intelligence [cs.AI]
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
The upper confidence reinforcement learning (UCRL2) algorithm introduced in (Jaksch et al., 2010) is a popular method for regret minimization in unknown discrete Markov Decision Processes under the average-reward criterion. Despite its generic theoretical regret guarantees, this algorithm and its variants have so far remained mostly theoretical, as numerical experiments in even simple environments exhibit long burn-in phases before learning takes place. In pursuit of practical efficiency, we present UCRL3, which follows the lines of UCRL2 but with two key modifications. First, it uses state-of-the-art time-uniform concentration inequalities to compute confidence sets on the reward and (component-wise) transition distributions for each state-action pair. Second, to tighten exploration, it uses an adaptive computation of the support of each transition distribution, which in turn enables us to revisit the extended value iteration procedure of UCRL2 to optimize over distributions with reduced support by disregarding low-probability transitions, while still ensuring near-optimism. We demonstrate, through numerical experiments in standard environments, that reducing exploration this way yields a substantial numerical improvement over UCRL2 and its variants. On the theoretical side, these key modifications enable us to derive a regret bound for UCRL3 that improves on UCRL2 and, thanks to variance-aware concentration bounds, for the first time involves notions of local diameter and local effective support.
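The "reduced support" idea in the abstract can be illustrated by the inner maximization of extended value iteration: tilt the empirical transition distribution toward high-value next states within a confidence ball, but only over the states deemed plausible. The sketch below is illustrative only, assuming an L1-ball confidence set of a given width; the function name, inputs, and the specific mass-shifting rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def optimistic_transition(p_hat, conf_width, support, values):
    """Sketch of the extended-value-iteration inner step: within an L1
    confidence ball of radius conf_width around the empirical distribution
    p_hat, restricted to the plausible next states in `support`, pick the
    distribution that maximizes the expected value."""
    p = np.zeros_like(p_hat)
    p[support] = p_hat[support]
    p[support] /= p[support].sum()  # renormalize over the reduced support
    # Move up to conf_width / 2 of probability mass onto the best-value state.
    best = support[np.argmax(values[support])]
    p[best] = min(1.0, p[best] + conf_width / 2)
    # Remove the resulting excess mass from the worst-value states first.
    excess = p.sum() - 1.0
    for s in sorted(support, key=lambda s: values[s]):
        take = min(p[s], excess)
        p[s] -= take
        excess -= take
        if excess <= 0:
            break
    return p
```

Note that states outside `support` keep probability zero even if they have high value, which is exactly how disregarding low-probability transitions tightens the optimistic optimization.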
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
Source :
Files
- https://hal.archives-ouvertes.fr/hal-03000664/document (ICML2020_UCRL3_FinalVersion.pdf) - Open access
- https://hal.archives-ouvertes.fr/hal-03000664/file/ICML2020_UCRL3_FinalVersion.synctex.gz - Open access
- https://hal.archives-ouvertes.fr/hal-03000664/file/RS_ICML2020.pdf - Open access
- https://hal.archives-ouvertes.fr/hal-03000664/file/RS_ICML2020.pdf_tex - Open access