Document type:
Preprint or working paper
Title:
Forced-exploration free Strategies for Unimodal Bandits
Author(s):
Saber, Hassan [Author]
Scool [Scool]
Sequential Learning [SEQUEL]
Ménard, Pierre [Author]
Scool [Scool]
Sequential Learning [SEQUEL]
Maillard, Odalric Ambrym [Author]
Scool [Scool]
Sequential Learning [SEQUEL]
Keyword(s) (English):
Structured Bandits
Indexed Minimum Empirical Divergence
Optimal Strategy
HAL discipline(s):
Computer Science [cs]/Information Theory [cs.IT]
Statistics [stat]/Machine Learning [stat.ML]
Abstract (English):
We consider a multi-armed bandit problem specified by a set of Gaussian or Bernoulli distributions endowed with a unimodal structure. Although this problem has been addressed in the literature (Combes and Proutiere, 2014), the state-of-the-art algorithms for this structure rely on a forced-exploration mechanism. We introduce IMED-UB, the first forced-exploration-free strategy that exploits the unimodal structure, by adapting to this setting the Indexed Minimum Empirical Divergence (IMED) strategy introduced by Honda and Takemura (2015). This strategy is proven optimal. We then derive KLUCB-UB, a KLUCB version of IMED-UB, which is also proven optimal. Owing to our proof technique, we are further able to provide a concise finite-time analysis of both strategies in a unified way. Numerical experiments show that IMED-UB and KLUCB-UB perform similarly in practice and outperform the state-of-the-art algorithms.
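
A minimal sketch of the mechanism the abstract alludes to may help the reader. The code below implements an IMED-UB-style decision rule for Bernoulli arms on a line graph: the IMED index N_a * kl(mu_a, mu*) + log N_a is taken from Honda and Takemura (2015), and the unimodal adaptation is to minimise it only over the empirical best arm and its neighbours, so no forced-exploration schedule is needed. This is our own illustrative reconstruction under those assumptions, not code from the paper; all names (imed_ub_step, kl_bernoulli) are hypothetical.

import math
import random

def kl_bernoulli(p, q, eps=1e-12):
    # Bernoulli KL divergence kl(p, q), clipped away from 0 and 1 for safety.
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def imed_ub_step(counts, sums):
    # One decision of an IMED-UB-style rule on a line graph of K arms.
    K = len(counts)
    for a in range(K):  # play each arm once to initialise
        if counts[a] == 0:
            return a
    means = [sums[a] / counts[a] for a in range(K)]
    best = max(range(K), key=lambda a: means[a])
    # Unimodal structure: only the empirical best arm and its neighbours compete.
    candidates = [a for a in (best - 1, best, best + 1) if 0 <= a < K]
    # IMED index: N_a * kl(mu_a, mu_best) + log N_a; play its minimiser.
    def index(a):
        return counts[a] * kl_bernoulli(means[a], means[best]) + math.log(counts[a])
    return min(candidates, key=index)

# Toy run on a unimodal Bernoulli instance whose peak is arm 2.
mus = [0.1, 0.4, 0.7, 0.5, 0.2]
counts, sums = [0] * len(mus), [0.0] * len(mus)
for t in range(10000):
    a = imed_ub_step(counts, sums)
    counts[a] += 1
    sums[a] += 1.0 if random.random() < mus[a] else 0.0

After enough rounds the pull counts concentrate on arm 2 even though no arm is ever forcibly sampled, which is the forced-exploration-free behaviour the abstract claims.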
Language:
English
Files:
- https://hal.archives-ouvertes.fr/hal-02883907/document (open access)
- http://arxiv.org/pdf/2006.16569 (open access)