Statistical efficiency of Thompson sampling for combinatorial semi-bandits
Document type:
Conference paper with proceedings
Title:
Statistical efficiency of Thompson sampling for combinatorial semi-bandits
Author(s):
Perrault, Pierre [Author]
Scool [Scool]
Ecole Normale Supérieure Paris-Saclay [ENS Paris Saclay]
Boursier, Etienne [Author]
Ecole Normale Supérieure Paris-Saclay [ENS Paris Saclay]
Perchet, Vianney [Author]
Centre de Recherche en Économie et Statistique [CREST]
Criteo AI Lab
Valko, Michal [Author]
DeepMind [Paris]
Conference:
Neural Information Processing Systems
City:
Virtual
Country:
France
Conference start date:
2020-12-06
HAL discipline(s):
Computer Science [cs]/Artificial Intelligence [cs.AI]
Mathematics [math]/Statistics [math.ST]
English abstract: [en]
We investigate stochastic combinatorial multi-armed bandit with semi-bandit feedback (CMAB). In CMAB, the question of the existence of an efficient policy with an optimal asymptotic regret (up to a factor poly-logarithmic with the action size) is still open for many families of distributions, including mutually independent outcomes, and more generally the multivariate sub-Gaussian family. We propose to answer the above question for these two families by analyzing variants of the Combinatorial Thompson Sampling policy (CTS). For mutually independent outcomes in $[0,1]$, we propose a tight analysis of CTS using Beta priors. We then look at the more general setting of multivariate sub-Gaussian outcomes and propose a tight analysis of CTS using Gaussian priors. This last result gives us an alternative to the Efficient Sampling for Combinatorial Bandit policy (ESCB), which, although optimal, is not computationally efficient.
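The abstract describes Combinatorial Thompson Sampling (CTS) with Beta priors under semi-bandit feedback. The following is a minimal illustrative sketch of that scheme, assuming mutually independent Bernoulli outcomes, Beta(1, 1) priors, and a toy top-k linear oracle; the problem instance, the `oracle` function, and all parameter choices are hypothetical stand-ins for illustration, not the construction or analysis from the paper.

```python
import numpy as np

# Illustrative sketch of Combinatorial Thompson Sampling (CTS) with Beta priors,
# under semi-bandit feedback. Assumes mutually independent Bernoulli outcomes
# and a toy top-k oracle; these choices are stand-ins, not the paper's setup.

rng = np.random.default_rng(0)

n_arms = 6          # number of base arms
horizon = 1000      # number of rounds
true_means = rng.uniform(0.2, 0.8, size=n_arms)  # unknown Bernoulli means

def oracle(theta, k=3):
    """Combinatorial oracle: return the super-arm (subset of at most k arms)
    maximizing the sum of the sampled means. For this toy top-k problem the
    maximizer is simply the k largest components."""
    return np.argsort(theta)[-k:]

# Beta(1, 1) prior on each arm's mean
alpha = np.ones(n_arms)
beta = np.ones(n_arms)

for t in range(horizon):
    # 1. Sample a mean estimate for every arm from its Beta posterior
    theta = rng.beta(alpha, beta)
    # 2. Play the super-arm returned by the oracle on the sampled means
    action = oracle(theta)
    # 3. Semi-bandit feedback: observe the outcome of every arm in the action
    outcomes = rng.binomial(1, true_means[action])
    # 4. Conjugate Beta-Bernoulli posterior update for the observed arms only
    alpha[action] += outcomes
    beta[action] += 1 - outcomes

print("posterior means:", alpha / (alpha + beta))
print("true means:     ", true_means)
```

For the multivariate sub-Gaussian setting mentioned in the abstract, the same loop would instead sample from Gaussian posteriors; the Beta-Bernoulli version above is only the simplest concrete instance.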
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files:
- http://arxiv.org/pdf/2006.06613 (arXiv: 2006.06613, open access)