Thompson sampling for one-dimensional exponential family bandits
Document type:
Conference paper with published proceedings
Title:
Thompson sampling for one-dimensional exponential family bandits
Author(s):
Korda, Nathaniel [Auteur]
Sequential Learning [SEQUEL]
Kaufmann, Emilie [Auteur]
Laboratoire Traitement et Communication de l'Information [LTCI]
Munos, Rémi [Auteur]
Sequential Learning [SEQUEL]
Conference title:
Advances in Neural Information Processing Systems
Country:
United States of America
Conference start date:
2013
Book title:
Advances in Neural Information Processing Systems
Publication date:
2013
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
English abstract:
Thompson Sampling has been demonstrated in many complex bandit models; however, the theoretical guarantees available for the parametric multi-armed bandit are still limited to the Bernoulli case. Here we extend them by proving asymptotic optimality of the algorithm using the Jeffreys prior for one-dimensional exponential family bandits. Our proof builds on previous work, but also makes extensive use of the closed forms for the Kullback-Leibler divergence and Fisher information (and thus the Jeffreys prior) available in an exponential family. This allows us to give a finite-time exponential concentration inequality for posterior distributions on exponential families that may be of interest in its own right. Moreover, our analysis covers some distributions for which no optimistic algorithm has yet been proposed, including heavy-tailed exponential families.
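To illustrate the algorithm the abstract analyzes, here is a minimal sketch of Thompson Sampling with the Jeffreys prior for the Bernoulli case (itself a one-dimensional exponential family, where the Jeffreys prior is Beta(1/2, 1/2)). This is an illustrative sketch only, not the paper's analysis or code; the function name and example arm means are hypothetical.

```python
import random

def thompson_sampling_bernoulli(arm_means, horizon, seed=0):
    """Thompson Sampling for Bernoulli bandits with the Jeffreys prior.

    For Bernoulli rewards the Jeffreys prior is Beta(1/2, 1/2), so the
    posterior after s successes and f failures is Beta(s + 1/2, f + 1/2).
    """
    rng = random.Random(seed)
    k = len(arm_means)
    successes = [0] * k
    failures = [0] * k
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample from each arm's posterior and play the argmax.
        samples = [rng.betavariate(successes[i] + 0.5, failures[i] + 0.5)
                   for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < arm_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return successes, failures, total_reward

# Example run: the posterior concentrates and the 0.8 arm dominates play.
succ, fail, total = thompson_sampling_bernoulli([0.2, 0.5, 0.8], horizon=2000)
pulls = [s + f for s, f in zip(succ, fail)]
print(pulls, total)
```

For a general one-dimensional exponential family, the Beta posterior update would be replaced by the conjugate update induced by the Jeffreys prior of that family; the sample-then-argmax structure is unchanged.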
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files:
- https://hal.archives-ouvertes.fr/hal-00923683/document (nips13-TS.pdf)
- Open access