Adversarially Guided Actor-Critic
Document type :
Conference paper with published proceedings
Title :
Adversarially Guided Actor-Critic
Author(s) :
Flet Berliac, Yannis [Author]
Scool [Scool]
Ferret, Johan [Author]
Scool [Scool]
Google Brain, Paris
Pietquin, Olivier [Author]
Google Brain, Paris
Preux, Philippe [Author]
Scool [Scool]
Geist, Matthieu [Author]
Google Brain, Paris
Scool [Scool]
Conference title :
ICLR 2021 - International Conference on Learning Representations
City :
Vienna / Virtual
Country :
Austria
Start date of the conference :
2021-05-04
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
Despite definite success in deep reinforcement learning problems, actor-critic algorithms are still confronted with sample inefficiency in complex environments, particularly in tasks where efficient exploration is a bottleneck. These methods consider a policy (the actor) and a value function (the critic) whose respective losses are built using different motivations and approaches. This paper introduces a third protagonist: the adversary. While the adversary mimics the actor by minimizing the KL-divergence between their respective action distributions, the actor, in addition to learning to solve the task, tries to differentiate itself from the adversary predictions. This novel objective stimulates the actor to follow strategies that could not have been correctly predicted from previous trajectories, making its behavior innovative in tasks where the reward is extremely rare. Our experimental analysis shows that the resulting Adversarially Guided Actor-Critic (AGAC) algorithm leads to more exhaustive exploration. Notably, AGAC outperforms current state-of-the-art methods on a set of various hard-exploration and procedurally-generated tasks.
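The abstract describes the mechanism only at a high level. Below is a minimal NumPy sketch of how an adversarial KL term could enter the actor and adversary objectives on a toy batch; the coefficient `c`, the toy probabilities, and the exact placement of the bonus are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def categorical_kl(p, q, eps=1e-8):
    """D_KL(p || q) between rows of two categorical distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

# Toy per-step quantities for a batch of two transitions (illustrative values only).
actor_probs = np.array([[0.7, 0.2, 0.1],
                        [0.4, 0.4, 0.2]])       # pi(. | s), the actor
adversary_probs = np.array([[0.5, 0.3, 0.2],
                            [0.3, 0.5, 0.2]])   # pi_adv(. | s), the adversary
advantages = np.array([0.5, -0.2])              # ordinary advantage estimates
actions = np.array([0, 1])                      # actions taken
c = 0.4                                         # assumed weight of the adversarial bonus

# Actor: the usual policy-gradient term, with the advantage augmented by a KL bonus
# that grows when the actor's action distribution diverges from the adversary's
# prediction, rewarding behaviour the adversary failed to anticipate.
kl_bonus = categorical_kl(actor_probs, adversary_probs)
log_pi = np.log(actor_probs[np.arange(len(actions)), actions])
actor_objective = np.mean(log_pi * (advantages + c * kl_bonus))

# Adversary: mimic the actor by minimising the same KL divergence
# (in a full implementation, gradients would flow only through the adversary here).
adversary_loss = np.mean(kl_bonus)

print(f"actor objective (to maximise): {actor_objective:.4f}")
print(f"adversary loss (to minimise):  {adversary_loss:.4f}")
```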
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
Source :
Files
- https://hal.inria.fr/hal-03167169/document (open access)
- http://arxiv.org/pdf/2102.04376 (open access)