Document type:
Conference paper with proceedings
Title:
Optimal Thompson Sampling strategies for support-aware CVaR bandits
Author(s):
Baudry, Dorian [Author]
Scool [Scool]
Gautron, Romain [Author]
Agroécologie et Intensification Durables des cultures annuelles [UPR AIDA]
Kaufmann, Emilie [Author]
Scool [Scool]
Maillard, Odalric Ambrym [Author]
Scool [Scool]
Conference title:
38th International Conference on Machine Learning
City:
Virtual
Country:
United States of America
Conference start date:
2021-07-18
Journal title:
Proceedings of Machine Learning Research
HAL discipline(s):
Computer Science [cs]
Statistics [stat]/Machine Learning [stat.ML]
Abstract (English):
In this paper we study a multi-armed bandit problem in which the quality of each arm is measured by the Conditional Value at Risk (CVaR) at some level alpha of the reward distribution. While existing works in this setting mainly focus on Upper Confidence Bound algorithms, we introduce a new Thompson Sampling approach for CVaR bandits on bounded rewards that is flexible enough to solve a variety of problems grounded in physical resources. Building on a recent work by Riou & Honda (2020), we introduce B-CVTS for continuous bounded rewards and M-CVTS for multinomial distributions. On the theoretical side, we provide a non-trivial extension of their analysis that makes it possible to bound their CVaR regret minimization performance. Strikingly, our results show that these strategies are the first to provably achieve asymptotic optimality in CVaR bandits, matching the corresponding asymptotic lower bounds for this setting. Further, we empirically illustrate the benefit of Thompson Sampling approaches both in a realistic environment simulating a use case in agriculture and on various synthetic examples.
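As context for the abstract above, here is a minimal sketch (not the paper's implementation) of the empirical CVaR criterion the bandit optimizes, together with an illustrative Dirichlet-reweighted CVaR index in the spirit of the non-parametric Thompson Sampling of Riou & Honda (2020) that B-CVTS builds on. The function names and the support upper bound `b` are assumptions made for illustration only.

```python
import numpy as np

def empirical_cvar(samples, alpha):
    # CVaR at level alpha of a reward distribution: the mean of the
    # worst alpha-fraction of outcomes (lower tail; larger rewards are better).
    s = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(s))))
    return float(s[:k].mean())

def dirichlet_cvar_index(samples, alpha, b=1.0, rng=None):
    # Illustrative Thompson-style index (an assumption, not the paper's
    # exact algorithm): append the known support upper bound b as an
    # optimistic atom, draw random Dirichlet weights over the atoms, and
    # return the CVaR of the reweighted empirical distribution.
    rng = np.random.default_rng() if rng is None else rng
    atoms = np.append(np.asarray(samples, dtype=float), b)
    w = rng.dirichlet(np.ones(len(atoms)))
    order = np.argsort(atoms)
    atoms, w = atoms[order], w[order]
    cum = np.cumsum(w)
    # Mass each atom contributes to the lower alpha-tail.
    tail = np.minimum(w, np.maximum(0.0, alpha - (cum - w)))
    return float(np.dot(tail, atoms) / alpha)

rewards = [0.1, 0.2, 0.4, 0.5, 0.9, 1.0]
print(empirical_cvar(rewards, 0.5))  # mean of the three worst rewards
```

In a bandit loop, one such index would be drawn per arm at each round and the arm with the largest index pulled, mirroring how Thompson Sampling trades off exploration (randomness of the Dirichlet weights) against exploitation (the empirical CVaR).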
Language:
English
Peer reviewed:
Yes
Audience:
International
Popular science:
No
Comment:
Presented at the Thirty-eighth International Conference on Machine Learning (ICML 2021)
Files:
- http://arxiv.org/pdf/2012.05754 (Open access)
- https://hal.archives-ouvertes.fr/hal-03447244/document (Open access)