Cheap Bandits
Document type :
Conference paper with proceedings
Title :
Cheap Bandits
Author(s) :
Hanawal, Manjesh Kumar [Author]
Department of Electrical and Computer Engineering [Boston University] [ECE]
Saligrama, Venkatesh [Author]
Department of Electrical and Computer Engineering [Boston University] [ECE]
Valko, Michal [Author]
Sequential Learning [SEQUEL]
Munos, Rémi [Author]
Sequential Learning [SEQUEL]
Conference title :
International Conference on Machine Learning
City :
Lille
Country :
France
Start date of the conference :
2015
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
Computer Science [cs]/Information Retrieval [cs.IR]
English abstract : [en]
We consider stochastic sequential learning problems where the learner can observe the average reward of several actions. Such a setting is interesting in many applications involving monitoring and surveillance, where the set of actions to observe represents some (geographical) area. The importance of this setting is that in these applications it is actually cheaper to observe the average reward of a group of actions rather than the reward of a single action. We show that when the reward is smooth over a given graph representing the neighboring actions, we can maximize the cumulative reward of learning while minimizing the sensing cost. In this paper we propose CheapUCB, an algorithm that matches the regret guarantees of the known algorithms for this setting and at the same time guarantees a linear cost gain over them. As a by-product of our analysis, we establish an Ω(√(dT)) lower bound on the cumulative regret of spectral bandits for a class of graphs with effective dimension d.
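The setting described in the abstract can be illustrated with a small sketch. This is not the authors' CheapUCB algorithm, only a toy in its spirit: arms sit on a path graph with a smooth mean-reward profile, a probe of a neighborhood returns the noisy average reward of that group, wider probes are charged less (the cost convention 1/(1 + width) is an illustrative assumption), and a UCB loop uses cheap wide probes early before switching to single-arm probes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: arms are nodes of a path graph; the mean reward is smooth
# over the graph, so neighboring arms have similar means.
n_arms = 20
means = np.sin(np.linspace(0, np.pi, n_arms))  # smooth reward profile
noise = 0.1

def sense(center, width):
    """Observe the noisy average reward of arms within `width` hops of
    `center`.  Wider probes are cheaper; the cost 1/(1 + width) is an
    illustrative convention, not the paper's cost model."""
    lo, hi = max(0, center - width), min(n_arms, center + width + 1)
    avg = means[lo:hi].mean()
    return avg + rng.normal(0, noise), 1.0 / (1 + width)

# A crude UCB loop in the spirit of CheapUCB: wide (cheap) probes in early
# rounds, single-arm probes later.  The group average is a slightly biased
# estimate of the center arm, which is acceptable because rewards are smooth.
counts = np.zeros(n_arms)
estimates = np.zeros(n_arms)
total_cost = 0.0
T = 2000
for t in range(1, T + 1):
    width = 2 if t <= T // 4 else 0  # cheap, wide probes early on
    ucb = estimates + np.sqrt(2 * np.log(t) / np.maximum(counts, 1e-9))
    arm = int(np.argmax(ucb))
    reward, cost = sense(arm, width)
    total_cost += cost
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(f"total sensing cost {total_cost:.1f} over {T} rounds")
```

Because the first T/4 rounds use width-2 probes at a third of the cost, the total sensing cost stays well below the T that single-arm sensing would incur, while the UCB loop still concentrates its pulls near the high-reward region of the graph.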
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
Source :
Files
- https://hal.inria.fr/hal-01153540/document
- Open access
- Access the document
- hanawal2015cheap.pdf
- Open access
- Access the document