Non-Asymptotic Pure Exploration by Solving Games
Document type :
Preprint or Working Paper
Title :
Non-Asymptotic Pure Exploration by Solving Games
Author(s) :
Degenne, Rémy [Auteur]
Centre de Mathématiques et de Leurs Applications [CMLA]
Koolen, Wouter [Auteur]
Centrum Wiskunde & Informatica [CWI]
Ménard, Pierre [Auteur]
Sequential Learning [SEQUEL]
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
Pure exploration (a.k.a. active testing) is the fundamental task of sequentially gathering information to answer a query about a stochastic environment. Good algorithms make few mistakes and take few samples. Lower bounds (for multi-armed bandit models with arms in an exponential family) reveal that the sample complexity is determined by the solution to an optimisation problem. Existing state-of-the-art algorithms achieve asymptotic optimality by solving a plug-in estimate of that optimisation problem at each step. We interpret the optimisation problem as an unknown game, and propose sampling rules based on iterative strategies to estimate and converge to its saddle point. We apply no-regret learners to obtain the first finite-confidence guarantees that are adapted to the exponential family and which apply to any pure exploration query and bandit structure. Moreover, our algorithms only use a best-response oracle instead of fully solving the optimisation problem.
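The optimisation problem referenced in the abstract is the characteristic-time problem from the pure-exploration lower bounds of Garivier and Kaufmann (2016), which this line of work builds on. For reference, a minimal LaTeX statement under assumed (not quoted) notation:

```latex
% Characteristic time T_*(\mu): the sample-complexity constant that the
% lower bound identifies and that adaptive sampling rules target.
T_*(\mu)^{-1} = \sup_{w \in \Delta_K} \, \inf_{\lambda \in \mathrm{Alt}(\mu)}
    \sum_{a=1}^{K} w_a \, d(\mu_a, \lambda_a)
% \Delta_K  : probability simplex over the K arms (sampling proportions w)
% Alt(\mu)  : alternative models with a different correct answer
% d         : divergence between exponential-family arm distributions
```

The game interpretation treats this sup-inf as a zero-sum game between a w-player and a λ-player. The sketch below shows one way such an iteration can look, assuming Gaussian arms with unit variance and a best-arm identification query: exponential weights (a no-regret learner, standing in for whichever learner the paper instantiates) for the w-player against a closed-form best-response oracle for the λ-player. Function names are illustrative, not taken from the paper.

```python
import numpy as np

def best_response_gaussian_bai(w, mu):
    """Lambda-player best response for Gaussian best-arm identification
    (unit variance): the closest alternative model in which some arm j
    overtakes the empirical best arm, plus the gradient of the weighted
    divergence with respect to w (valid by Danskin's theorem)."""
    istar = int(np.argmax(mu))
    best_val, best_grad = np.inf, None
    for j in range(len(mu)):
        if j == istar:
            continue
        # Closest alternative swapping arms istar and j: both means move
        # to their w-weighted average m.
        m = (w[istar] * mu[istar] + w[j] * mu[j]) / (w[istar] + w[j])
        grad = np.zeros(len(mu))
        grad[istar] = 0.5 * (mu[istar] - m) ** 2
        grad[j] = 0.5 * (mu[j] - m) ** 2
        val = float(w @ grad)
        if val < best_val:
            best_val, best_grad = val, grad
    return best_val, best_grad

def approximate_saddle_point(mu, n_iter=5000, lr=0.5):
    """Exponential-weights (Hedge) iterates for the w-player against the
    best-response oracle; the averaged iterates approximate the optimal
    sampling proportions w*."""
    logits = np.zeros(len(mu))
    w_avg = np.zeros(len(mu))
    for _ in range(n_iter):
        w = np.exp(logits - logits.max())
        w /= w.sum()
        _, grad = best_response_gaussian_bai(w, mu)
        logits += lr * grad  # ascent step: the w-player maximises the value
        w_avg += w
    return w_avg / n_iter

if __name__ == "__main__":
    mu = np.array([1.0, 0.8, 0.5, 0.3])
    # Most weight goes to the two hardest-to-separate arms.
    print(approximate_saddle_point(mu))
```

The averaged iterates returned by `approximate_saddle_point` approximate the optimal proportions w*; a sampling rule built on such iterates can track them from plug-in estimates of μ, calling only the best-response oracle at each step rather than fully solving the optimisation problem, which is the saving the abstract highlights.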
Language :
English
Files
- https://hal.archives-ouvertes.fr/hal-02402665/document (main.pdf)
- Open access
- Access the document