Improved Learning Complexity in Combinatorial Pure Exploration Bandits
Document type :
Conference paper with published proceedings
Title :
Improved Learning Complexity in Combinatorial Pure Exploration Bandits
Author(s) :
Gabillon, Victor [Auteur]
Queensland University of Technology [Brisbane] [QUT]
Lazaric, Alessandro [Auteur]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Sequential Learning [SEQUEL]
Ghavamzadeh, Mohammad [Auteur]
Sequential Learning [SEQUEL]
Adobe Systems Inc. [Adobe Advanced Technology Labs]
Ortner, Ronald [Auteur]
Montanuniversität Leoben [MUL]
Bartlett, Peter [Auteur]
Queensland University of Technology [Brisbane] [QUT]
Conference title :
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS)
City :
Cadiz
Country :
Spain
Start date of the conference :
2016-05
Publication date :
2016
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
We study the problem of combinatorial pure exploration in the stochastic multi-armed bandit problem. We first construct a new measure of complexity that provably characterizes the learning performance of the algorithms we propose for the fixed confidence and the fixed budget setting. We show that this complexity is never higher than the one in existing work and illustrate a number of configurations in which it can be significantly smaller. While in general this improvement comes at the cost of increased computational complexity, we provide a series of examples, including a planning problem, where this extra cost is not significant.
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
Source :
Files
- https://hal.inria.fr/hal-01322198/document
- Open access
- Access the document
- AISTATS_full_CR.pdf
- Open access
- Access the document