Optimal Thompson Sampling strategies for support-aware CVaR bandits
Document type :
Conference paper with published proceedings
Permalink :
Title :
Optimal Thompson Sampling strategies for support-aware CVaR bandits
Author(s) :
Baudry, Dorian [Auteur]
Scool [Scool]
Gautron, Romain [Auteur]
Scool [Scool]
Kaufmann, Emilie [Auteur]
Scool [Scool]
Maillard, Odalric-Ambrym [Auteur]
Scool [Scool]
Conference title :
ICML 2021 - International Conference on Machine Learning
City :
Virtual Conference
Country :
United States of America
Start date of the conference :
2021-07-18
HAL domain(s) :
Mathematics [math]/Statistics [math.ST]
Statistics [stat]/Other [stat.ML]
English abstract : [en]
In this paper we study a multi-armed bandit problem in which the quality of each arm is measured by the Conditional Value at Risk (CVaR) at some level α of the reward distribution. While existing works in this setting mainly focus on Upper Confidence Bound algorithms, we introduce a new Thompson Sampling approach for CVaR bandits on bounded rewards that is flexible enough to solve a variety of problems grounded on physical resources. Building on a recent work by Riou and Honda (2020), we introduce B-CVTS for continuous bounded rewards and M-CVTS for multinomial distributions. On the theoretical side, we provide a non-trivial extension of their analysis that enables us to bound their CVaR regret minimization performance. Strikingly, our results show that these strategies are the first to provably achieve asymptotic optimality in CVaR bandits, matching the corresponding asymptotic lower bounds for this setting. Further, we empirically illustrate the benefit of Thompson Sampling approaches both in a realistic environment simulating a use case in agriculture and on various synthetic examples.
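The abstract scores each arm by its CVaR at level α, i.e. the expected reward over the worst α-fraction of outcomes. As a minimal illustrative sketch (not code from the paper; the function name and sorting-based estimator are standard but chosen here for illustration), the empirical CVaR of a sample of bounded rewards can be computed as:

```python
import numpy as np

def empirical_cvar(samples, alpha):
    """Empirical CVaR at level alpha: the mean of the worst
    alpha-fraction of observed rewards (lower-tail, risk-averse
    convention, matching reward maximization under CVaR)."""
    x = np.sort(np.asarray(samples, dtype=float))   # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(x))))        # size of the alpha-tail
    return x[:k].mean()

# Example: CVaR at level 0.2 of rewards from a bounded distribution
rng = np.random.default_rng(0)
rewards = rng.uniform(0.0, 1.0, size=1000)
print(empirical_cvar(rewards, alpha=0.2))
```

At α = 1 this reduces to the ordinary empirical mean, recovering the classical bandit objective as a special case.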
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
ANR Project :
Collections :
Source :
Submission date :
2021-12-11T02:00:32Z
Files
- https://hal.archives-ouvertes.fr/hal-03472593/document
- Open access