Efficient Algorithms for Extreme Bandits
Document type :
Conference paper with proceedings
Title :
Efficient Algorithms for Extreme Bandits
Author(s) :
Baudry, Dorian [Author]
Scool [Scool]
Centre National de la Recherche Scientifique [CNRS]
Russac, Yoan [Author]
Département d'informatique - ENS-PSL [DI-ENS]
Université Paris Sciences et Lettres [PSL]
Kaufmann, Emilie [Author]
Centre National de la Recherche Scientifique [CNRS]
Scool [Scool]
Conference title :
International Conference on Artificial Intelligence and Statistics (AISTATS)
City :
Virtual Conference
Country :
Spain
Start date of the conference :
2022-03-28
Journal title :
Proceedings of Machine Learning Research (PMLR)
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
In this paper, we contribute to the Extreme Bandit problem, a variant of Multi-Armed Bandits in which the learner seeks to collect the largest possible reward. We first study the concentration of the maximum of i.i.d. random variables under mild assumptions on the tail of the reward distributions. This analysis motivates the introduction of Quantile of Maxima (QoMax). The properties of QoMax are sufficient to build an Explore-Then-Commit (ETC) strategy, QoMax-ETC, achieving strong asymptotic guarantees despite its simplicity. We then propose and analyze a more adaptive, anytime algorithm, QoMax-SDA, which combines QoMax with a subsampling method recently introduced by Baudry et al. (2021). Both algorithms are more efficient than existing approaches in two aspects: (1) they lead to better empirical performance, and (2) they enjoy a significant reduction of the memory and time complexities.
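The abstract describes the QoMax estimator and the QoMax-ETC strategy only at a high level. The minimal Python sketch below illustrates the general idea under stated assumptions: QoMax is taken here to be the empirical quantile of batch maxima, and the batch count, batch size, quantile level, exploration budget, and callable-arm interface are illustrative choices of this sketch, not the tuning or interface from the paper.

import numpy as np

def qomax(samples, n_batches, q=0.5):
    """Empirical Quantile of Maxima (sketch): split the samples of one arm into
    n_batches batches, take the maximum of each batch, and return the empirical
    q-quantile of those batch maxima."""
    samples = np.asarray(samples)
    batch_size = len(samples) // n_batches
    batches = samples[: n_batches * batch_size].reshape(n_batches, batch_size)
    return np.quantile(batches.max(axis=1), q)

def qomax_etc(arms, horizon, n_batches=20, batch_size=50, q=0.5):
    """Explore-Then-Commit with QoMax (sketch): sample every arm for a fixed
    budget, rank arms by their QoMax, then play the winning arm for the
    remaining rounds. Each element of `arms` is assumed to be a callable
    returning one reward; this interface is an assumption of the sketch."""
    scores = []
    best_reward = -np.inf
    spent = 0
    for pull in arms:                       # exploration phase
        samples = np.array([pull() for _ in range(n_batches * batch_size)])
        spent += samples.size
        best_reward = max(best_reward, samples.max())
        scores.append(qomax(samples, n_batches, q))
    best_arm = int(np.argmax(scores))       # commit phase
    for _ in range(max(horizon - spent, 0)):
        best_reward = max(best_reward, arms[best_arm]())
    return best_arm, best_reward

# Example: three heavy-tailed (Pareto-like) arms with different tail indices.
rng = np.random.default_rng(0)
arms = [lambda a=a: rng.pareto(a) for a in (2.5, 2.0, 1.5)]
print(qomax_etc(arms, horizon=20_000))

The actual QoMax-ETC sets its exploration parameters from theoretical considerations, and QoMax-SDA replaces the fixed exploration phase with the subsampling-based comparisons mentioned in the abstract; see the paper for details.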
Language :
English
Peer reviewed article :
Yes
Audience :
Internationale
Popular science :
No
Files
- https://hal.archives-ouvertes.fr/hal-03741302/document (RBK22.pdf, open access)
- http://arxiv.org/pdf/2203.10883 (open access)