
Near-Optimal Collaborative Learning in Bandits

Document type :
Conference paper with proceedings
Title :
Near-Optimal Collaborative Learning in Bandits
Author(s) :
Réda, Clémence [Author]
Scool [Scool]
Maladies neurodéveloppementales et neurovasculaires [NeuroDiderot (UMR_S_1141 / U1141)]
Vakili, Sattar [Author]
Kaufmann, Emilie [Author]
Scool [Scool]

Conference title :
NeurIPS 2022 - 36th Conference on Neural Information Processing Systems
City :
New Orleans
Country :
United States of America
Start date of the conference :
2022-12
Journal title :
Advances in Neural Information Processing Systems
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
This paper introduces a general multi-agent bandit model in which each agent faces a finite set of arms and may communicate with other agents through a central controller in order to identify (in pure exploration) or play (in regret minimization) its optimal arm. The twist is that the optimal arm for each agent is the arm with the largest expected mixed reward, where the mixed reward of an arm is a weighted sum of the rewards of this arm for all agents. This makes communication between agents often necessary. This general setting allows one to recover and extend several recent models for collaborative bandit learning, including the recently proposed federated learning with personalization [30]. In this paper, we provide new lower bounds on the sample complexity of pure exploration and on the regret. We then propose a near-optimal algorithm for pure exploration. This algorithm is based on phased elimination with two novel ingredients: a data-dependent sampling scheme within each phase, aimed at matching a relaxation of the lower bound.
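The mixed-reward notion in the abstract can be illustrated with a minimal sketch (this is not the paper's algorithm; the number of agents, the expected-reward matrix, and the weight matrix below are all invented for illustration):

```python
import numpy as np

# Hypothetical setup: M agents, K arms. mu[m, k] is the expected
# reward of arm k for agent m; w[m, n] is the weight agent m places
# on agent n's rewards (rows sum to 1 here, with self-weight 0.6).
M, K = 3, 4
mu = np.array([[0.9, 0.1, 0.5, 0.3],
               [0.2, 0.8, 0.4, 0.6],
               [0.3, 0.2, 0.7, 0.1]])
w = np.full((M, M), 0.2) + 0.4 * np.eye(M)

# Mixed reward of arm k for agent m: sum over n of w[m, n] * mu[n, k].
mixed = w @ mu

# Each agent's optimal arm maximizes its own mixed reward.
optimal_arms = mixed.argmax(axis=1)
print(optimal_arms)  # → [0 1 2]
```

With these invented weights, each agent's own preferences dominate, so each agent's optimal arm coincides with the argmax of its own row of `mu`; skewing the weights toward other agents can change the optimal arms, which is why communication between agents is often necessary.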
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
ANR Project :
Au delà de l'apprentissage séquentiel pour de meilleures prises de décisions (Beyond sequential learning for better decision making)
Collections :
  • Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Source :
Harvested from HAL
Files
  • https://hal.archives-ouvertes.fr/hal-03825099/document (open access)
  • http://arxiv.org/pdf/2206.00121 (open access)