Self-Imitation Advantage Learning
Document type:
Conference paper with proceedings
Title:
Self-Imitation Advantage Learning
Author(s):
Ferret, Johan [Author] (Scool; Google Brain, Paris)
Pietquin, Olivier [Author] (Google Brain, Paris)
Geist, Matthieu [Author] (Google Brain, Paris)
Conference:
AAMAS 2021 - 20th International Conference on Autonomous Agents and Multiagent Systems
City:
London / Virtual
Country:
United Kingdom
Conference start date:
2021-05-03
Keyword(s) in English:
Reinforcement Learning
Off-Policy Learning
Self-Imitation
HAL discipline(s):
Computer Science [cs]/Artificial Intelligence [cs.AI]
Abstract in English: [en]
Self-imitation learning is a Reinforcement Learning (RL) method that encourages actions whose returns were higher than expected, which helps in hard exploration and sparse reward problems. It was shown to improve the performance of on-policy actor-critic methods in several discrete control tasks. Nevertheless, applying self-imitation to the mostly action-value based off-policy RL methods is not straightforward. We propose SAIL, a novel generalization of self-imitation learning for off-policy RL, based on a modification of the Bellman optimality operator that we connect to Advantage Learning. Crucially, our method mitigates the problem of stale returns by choosing the most optimistic return estimate between the observed return and the current action-value for self-imitation. We demonstrate the empirical effectiveness of SAIL on the Arcade Learning Environment, with a focus on hard exploration games.
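The key mechanism in the abstract, taking the more optimistic of the observed return and the current action-value inside an Advantage-Learning-style correction, can be sketched numerically. This is a minimal illustration under stated assumptions: the function name `sail_target`, its arguments, and the exact form of the advantage term are hypothetical simplifications, not the paper's actual operator.

```python
def sail_target(reward, gamma, q_next_max, q_sa, v_s, observed_return, alpha=0.9):
    """Hypothetical sketch of a SAIL-style target (not the paper's exact operator).

    Advantage Learning adds alpha * (Q(s, a) - V(s)) to the Bellman optimality
    target; the self-imitation idea sketched here substitutes for Q(s, a) the
    more optimistic of the stored Monte-Carlo return and the current estimate,
    which mitigates stale returns as described in the abstract.
    """
    optimistic = max(observed_return, q_sa)  # guard against stale stored returns
    bellman = reward + gamma * q_next_max    # standard Bellman optimality target
    return bellman + alpha * (optimistic - v_s)
```

When the stored return is lower than the current action-value estimate, the `max` falls back to the bootstrapped estimate, so stale returns never drag the target down.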
Language:
English
Peer reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files:
- https://hal.inria.fr/hal-03159815/document - Open access - Access the document
- 2012.11989.pdf - Open access - Access the document