Forced-exploration free Strategies for Unimodal Bandits
Document type :
Preprint or working paper
Title :
Forced-exploration free Strategies for Unimodal Bandits
Author(s) :
Saber, Hassan [Author]
Scool [Scool]
Sequential Learning [SEQUEL]
Ménard, Pierre [Author]
Scool [Scool]
Sequential Learning [SEQUEL]
Maillard, Odalric Ambrym [Author]
Scool [Scool]
Sequential Learning [SEQUEL]
English keyword(s) :
Structured Bandits
Indexed Minimum Empirical Divergence
Optimal Strategy
HAL domain(s) :
Computer Science [cs]/Information Theory [cs.IT]
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
We consider a multi-armed bandit problem specified by a set of Gaussian or Bernoulli distributions endowed with a unimodal structure. Although this problem has been addressed in the literature (Combes and Proutiere, 2014), the state-of-the-art algorithms for this structure rely on a forced-exploration mechanism. We introduce IMED-UB, the first forced-exploration free strategy that exploits the unimodal structure, by adapting to this setting the Indexed Minimum Empirical Divergence (IMED) strategy introduced by Honda and Takemura (2015). This strategy is proven optimal. We then derive KLUCB-UB, a KL-UCB version of IMED-UB, which is also proven optimal. Owing to our proof technique, we are further able to provide a concise finite-time analysis of both strategies in a unified way. Numerical experiments show that both IMED-UB and KLUCB-UB perform similarly in practice and outperform the state-of-the-art algorithms.
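As a rough illustration of the mechanism the abstract describes, here is a minimal Python sketch of one decision step of an IMED-UB-style rule for Bernoulli rewards. It is a sketch under assumptions, not the paper's pseudocode: it assumes arms 0..K-1 lie on a line (so the unimodal neighbors of arm a are a-1 and a+1), uses the IMED index form of Honda and Takemura (2015), and restricts candidates to the empirical leader and its neighbors, following the unimodal-structure idea in the abstract; consult the paper for the actual algorithm.

    import numpy as np

    def kl_bernoulli(p, q, eps=1e-12):
        """KL divergence between Bernoulli(p) and Bernoulli(q), clipped for stability."""
        p = min(max(p, eps), 1 - eps)
        q = min(max(q, eps), 1 - eps)
        return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

    def imed_ub_choice(means, counts):
        """One decision step of an IMED-UB-style rule (illustrative sketch).

        means[a] and counts[a] are the empirical mean and pull count of arm a.
        Arms are assumed arranged on a line, so the unimodal neighbors of the
        empirical leader are the adjacent indices. Among the leader and its
        neighbors, play the arm minimizing the IMED index
        N_a * KL(mu_a, mu_leader) + log N_a.
        """
        leader = int(np.argmax(means))
        candidates = [a for a in (leader - 1, leader, leader + 1)
                      if 0 <= a < len(means)]

        def index(a):
            if counts[a] == 0:
                return -np.inf  # never-pulled neighbor: index is minimal, pull it
            return counts[a] * kl_bernoulli(means[a], means[leader]) + np.log(counts[a])

        return min(candidates, key=index)

Note how this avoids forced exploration: a neighbor of the leader is never starved, because its index grows only like N_a * KL + log N_a, so an under-sampled neighbor with a plausible mean keeps a small index and gets re-pulled naturally.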
Language :
English
Files
- https://hal.archives-ouvertes.fr/hal-02883907/document (Open access)
- http://arxiv.org/pdf/2006.16569 (Open access)