Fast active learning for pure exploration in reinforcement learning
Document type:
Conference paper with proceedings
Title:
Fast active learning for pure exploration in reinforcement learning
Author(s):
Ménard, Pierre [Author]
Otto-von-Guericke-Universität Magdeburg = Otto-von-Guericke University [Magdeburg] [OVGU]
Domingues, Omar Darwiche [Author]
Scool [Scool]
Kaufmann, Emilie [Author]
Scool [Scool]
Jonsson, Anders [Author]
Leurent, Edouard [Author]
RENAULT
Valko, Michal [Author]
DeepMind [Paris]
Conference title:
International Conference on Machine Learning
City:
Vienna
Country:
Austria
Conference start date:
2021-07-18
HAL discipline(s):
Statistiques [stat]/Machine Learning [stat.ML]
Abstract (in English):
Realistic environments often provide agents with very limited feedback. When the environment is initially unknown, the feedback can be completely absent at first, and the agents may choose to devote all their effort to exploring efficiently. Exploration remains a challenge: it has been addressed with many hand-tuned heuristics of varying generality on one side, and with a few theoretically backed exploration strategies on the other. Many of them take the form of intrinsic motivation, and in particular exploration bonuses. A common rule of thumb is to add a $1/\sqrt{n}$ bonus to the empirical estimates of the reward, where $n$ is the number of times this particular state (or state-action pair) was visited. We show that, surprisingly, for the pure-exploration objective of reward-free exploration, bonuses that scale with $1/n$ bring faster learning rates, improving the known upper bounds with respect to the dependence on the horizon $H$. Furthermore, we show that with an improved analysis of the stopping time, we can improve by a factor $H$ the sample complexity in the best-policy identification setting, another pure-exploration objective in which the environment provides rewards but the agent is not penalized for its behavior during the exploration phase.
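To make the bonus rule of thumb concrete, here is a minimal sketch contrasting a $1/\sqrt{n}$ bonus with a $1/n$ bonus for a state-action pair visited $n$ times. The function names and scaling constants are illustrative assumptions, not the algorithm analyzed in the paper.

```python
# Sketch only: contrasts the classical 1/sqrt(n) exploration bonus with the
# faster-decaying 1/n bonus discussed in the abstract. The names and the
# H-dependent scaling are illustrative assumptions, not the paper's method.
import math

def sqrt_bonus(n: int, H: int, scale: float = 1.0) -> float:
    """Classical optimism bonus, decaying as 1/sqrt(n) in the visit count n."""
    return scale * H / math.sqrt(max(n, 1))

def inverse_bonus(n: int, H: int, scale: float = 1.0) -> float:
    """Bonus decaying as 1/n, reported to yield faster rates for
    pure-exploration objectives such as reward-free exploration."""
    return scale * H / max(n, 1)

if __name__ == "__main__":
    H = 10  # planning horizon
    for n in (1, 10, 100, 1000):
        print(f"n={n:5d}  1/sqrt(n) bonus={sqrt_bonus(n, H):8.4f}  "
              f"1/n bonus={inverse_bonus(n, H):8.4f}")
```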
Language:
English
Peer reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files
- https://hal.inria.fr/hal-02906985v3/document
- Open access
- Access the document
- menard2021fast.pdf
- Open access
- Access the document