Near-Optimal Collaborative Learning in Bandits
Document type :
Conference paper with proceedings
Title :
Near-Optimal Collaborative Learning in Bandits
Author(s) :
Réda, Clémence [Author]
Scool [Scool]
Maladies neurodéveloppementales et neurovasculaires [NeuroDiderot (UMR_S_1141 / U1141)]
Vakili, Sattar [Author]
Kaufmann, Emilie [Author]
Scool [Scool]
Conference title :
NeurIPS 2022 - 36th Conference on Neural Information Processing Systems
City :
New Orleans
Country :
United States of America
Start date of the conference :
2022-12
Journal title :
Advances in Neural Information Processing Systems
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
This paper introduces a general multi-agent bandit model in which each agent is facing a finite set of arms and may communicate with other agents through a central controller in order to identify (in pure exploration) or play (in regret minimization) its optimal arm. The twist is that the optimal arm for each agent is the arm with the largest expected mixed reward, where the mixed reward of an arm is a weighted sum of the rewards of this arm for all agents. This makes communication between agents often necessary. This general setting makes it possible to recover and extend several recent models for collaborative bandit learning, including the recently proposed federated learning with personalization [30]. In this paper, we provide new lower bounds on the sample complexity of pure exploration and on the regret. We then propose a near-optimal algorithm for pure exploration. This algorithm is based on phased elimination with two novel ingredients: a data-dependent sampling scheme within each phase, aimed at matching a relaxation of the lower bound.
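As a rough illustration of the mixed-reward objective described in the abstract, the short Python sketch below computes each agent's optimal arm when the mixed reward of an arm is a weighted sum of all agents' expected rewards for that arm. It is illustrative only: the weight matrix W, the reward matrix mu, and all variable names are placeholders of our own, not notation or code from the paper.

    import numpy as np

    # Illustrative sketch only (not the paper's algorithm): the mixed reward of
    # arm k for agent m is a weighted sum of all agents' expected rewards for k.
    M, K = 3, 4                          # number of agents, number of arms
    rng = np.random.default_rng(0)
    mu = rng.uniform(size=(M, K))        # mu[n, k]: expected reward of arm k for agent n
    W = rng.uniform(size=(M, M))
    W /= W.sum(axis=1, keepdims=True)    # W[m, n]: weight agent m places on agent n
    mixed = W @ mu                       # mixed[m, k] = sum_n W[m, n] * mu[n, k]
    optimal_arms = mixed.argmax(axis=1)  # each agent's optimal arm under mixed rewards
    print(optimal_arms)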
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files
- https://hal.archives-ouvertes.fr/hal-03825099/document (RVK22.pdf, open access)
- http://arxiv.org/pdf/2206.00121 (open access)