Near-Optimal Collaborative Learning in Bandits
Document type:
Conference paper (with proceedings)
Title:
Near-Optimal Collaborative Learning in Bandits
Author(s):
Réda, Clémence [Author] (Scool; Maladies neurodéveloppementales et neurovasculaires [NeuroDiderot (UMR_S_1141 / U1141)])
Vakili, Sattar [Author]
Kaufmann, Emilie [Author] (Scool)
Conference:
NeurIPS 2022 - 36th Conference on Neural Information Processing Systems
City:
New Orleans
Country:
United States of America
Conference start date:
2022-12
Journal title:
Advances in Neural Information Processing Systems
HAL discipline(s):
Statistics [stat]/Machine Learning [stat.ML]
Abstract (English):
This paper introduces a general multi-agent bandit model in which each agent faces a finite set of arms and may communicate with other agents through a central controller in order to identify (in pure exploration) or play (in regret minimization) its optimal arm. The twist is that the optimal arm for each agent is the arm with the largest expected mixed reward, where the mixed reward of an arm is a weighted sum of the rewards of this arm for all agents. This makes communication between agents often necessary. This general setting allows us to recover and extend several recent models for collaborative bandit learning, including the recently proposed federated learning with personalization [30]. In this paper, we provide new lower bounds on the sample complexity of pure exploration and on the regret. We then propose a near-optimal algorithm for pure exploration. This algorithm is based on phased elimination with two novel ingredients: a data-dependent sampling scheme within each phase, aimed at matching a relaxation of the lower bound.
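As a minimal illustration of the mixed-reward criterion described in the abstract, the sketch below computes each agent's optimal arm as the maximizer of its mixed expected reward (a weighted sum of all agents' expected rewards). The weight matrix and reward means are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical example: 3 agents, 4 arms.
# mu[n, k] is the expected reward of arm k for agent n.
mu = np.array([
    [0.9, 0.2, 0.5, 0.1],
    [0.1, 0.8, 0.4, 0.3],
    [0.3, 0.3, 0.9, 0.2],
])

# w[m, n] weights agent n's rewards within agent m's mixed reward.
# Rows sum to 1; a dominant diagonal favors each agent's own rewards,
# a "personalization"-style weighting in the spirit of the federated
# learning with personalization model mentioned in the abstract.
w = np.array([
    [0.6, 0.2, 0.2],
    [0.2, 0.6, 0.2],
    [0.2, 0.2, 0.6],
])

# Mixed expected reward of arm k for agent m: sum over n of w[m, n] * mu[n, k].
mixed = w @ mu

# Each agent's optimal arm maximizes its own mixed expected reward.
optimal_arms = mixed.argmax(axis=1)
print(optimal_arms)  # → [0 1 2]
```

With these (made-up) weights each agent's optimal arm happens to be its own best arm; with more uniform weights, agents' optimal arms can differ from their individually best arms, which is why communication through the controller becomes necessary.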
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files:
- https://hal.archives-ouvertes.fr/hal-03825099/document (open access)
- http://arxiv.org/pdf/2206.00121 (open access)
- RVK22.pdf (open access)