Self-Imitation Advantage Learning
Document type :
Conference paper with proceedings
Title :
Self-Imitation Advantage Learning
Author(s) :
Ferret, Johan [Author]
Scool; Google Brain, Paris
Pietquin, Olivier [Author]
Google Brain, Paris
Geist, Matthieu [Author]
Google Brain, Paris
Conference title :
AAMAS 2021 - 20th International Conference on Autonomous Agents and Multiagent Systems
City :
London / Virtual
Country :
United Kingdom
Start date of the conference :
2021-05-03
English keyword(s) :
Reinforcement Learning
Off-Policy Learning
Self-Imitation
HAL domain(s) :
Informatique [cs]/Intelligence artificielle [cs.AI]
English abstract : [en]
Self-imitation learning is a Reinforcement Learning (RL) method that encourages actions whose returns were higher than expected, which helps in hard exploration and sparse reward problems. It was shown to improve the performance of on-policy actor-critic methods in several discrete control tasks. Nevertheless, applying self-imitation to the mostly action-value based off-policy RL methods is not straightforward. We propose SAIL, a novel generalization of self-imitation learning for off-policy RL, based on a modification of the Bellman optimality operator that we connect to Advantage Learning. Crucially, our method mitigates the problem of stale returns by choosing the most optimistic return estimate between the observed return and the current action-value for self-imitation. We demonstrate the empirical effectiveness of SAIL on the Arcade Learning Environment, with a focus on hard exploration games.
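
The following is a minimal illustrative sketch of the target construction the abstract describes, assuming a standard DQN-style setup with a replay buffer that also stores observed Monte Carlo returns. The function name, the `alpha` coefficient value, and the array shapes are placeholders, not taken from the paper; only the use of the most optimistic estimate max(G, Q(s, a)) inside an Advantage-Learning-style correction follows the abstract.

```python
import numpy as np

def sail_style_target(q_values, next_q_values, rewards, returns,
                      actions, dones, gamma=0.99, alpha=0.9):
    """Sketch of a SAIL-style regression target (based on the abstract,
    not the paper's exact formulation).

    q_values      : (B, A) current Q(s, .) from the online network
    next_q_values : (B, A) Q(s', .) from the target network
    rewards       : (B,)   immediate rewards
    returns       : (B,)   observed (possibly stale) Monte Carlo returns G
    actions       : (B,)   actions taken in s
    dones         : (B,)   episode-termination flags (0.0 or 1.0)
    """
    batch = np.arange(q_values.shape[0])

    # Standard Bellman optimality target: r + gamma * max_a' Q(s', a').
    bellman_target = rewards + gamma * (1.0 - dones) * next_q_values.max(axis=1)

    # Advantage-Learning-style correction, with the stale observed return
    # replaced by the most optimistic estimate max(G, Q(s, a)), as the
    # abstract describes for self-imitation.
    q_sa = q_values[batch, actions]
    optimistic = np.maximum(returns, q_sa)
    advantage_bonus = alpha * (optimistic - q_values.max(axis=1))

    return bellman_target + advantage_bonus
```

Taking the maximum of the observed return and the current action-value means the self-imitation signal only pushes Q(s, a) upward when the stored return is still better than the agent's current estimate, which is how the abstract frames the mitigation of stale returns.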
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
Source :
Files
- https://hal.inria.fr/hal-03159815/document (Open access)
- 2012.11989.pdf (Open access)