
Adversarially Guided Actor-Critic

Document type :
Conference paper with proceedings
Title :
Adversarially Guided Actor-Critic
Author(s) :
Flet Berliac, Yannis [Author]
Scool
Ferret, Johan [Author]
Scool
Google Brain, Paris
Pietquin, Olivier [Author]
Google Brain, Paris
Preux, Philippe [Author]
Scool
Geist, Matthieu [Author]
Google Brain, Paris
Conference title :
ICLR 2021 - International Conference on Learning Representations
City :
Vienna / Virtual
Country :
Austria
Start date of the conference :
2021-05-04
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
Despite definite success in deep reinforcement learning problems, actor-critic algorithms are still confronted with sample inefficiency in complex environments, particularly in tasks where efficient exploration is a bottleneck. These methods consider a policy (the actor) and a value function (the critic) whose respective losses are built using different motivations and approaches. This paper introduces a third protagonist: the adversary. While the adversary mimics the actor by minimizing the KL-divergence between their respective action distributions, the actor, in addition to learning to solve the task, tries to differentiate itself from the adversary's predictions. This novel objective stimulates the actor to follow strategies that could not have been correctly predicted from previous trajectories, making its behavior innovative in tasks where the reward is extremely rare. Our experimental analysis shows that the resulting Adversarially Guided Actor-Critic (AGAC) algorithm leads to more exhaustive exploration. Notably, AGAC outperforms current state-of-the-art methods on a set of various hard-exploration and procedurally-generated tasks.
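
For readers who want the mechanics, the sketch below illustrates in PyTorch the three losses the abstract describes: a policy-gradient actor loss with an adversarial bonus, a critic regression, and an adversary that mimics the actor. The function name agac_losses, the bonus coefficient c_adv, and the discrete-action setup are assumptions made for this example; they are not taken from the authors' implementation.

# A minimal sketch of the adversarially guided objective described in the
# abstract. Architectures, coefficient values, and names (agac_losses, c_adv)
# are illustrative assumptions, not the authors' reference code.
import torch
import torch.nn.functional as F
from torch.distributions import Categorical, kl_divergence

def agac_losses(actor_logits, adversary_logits, values, returns,
                actions, advantages, c_adv=4e-4):
    # actor_logits, adversary_logits: (B, num_actions) action logits.
    # values, returns: (B,) critic outputs and bootstrapped targets.
    # actions, advantages: (B,) sampled actions and their advantages.
    pi = Categorical(logits=actor_logits)
    pi_adv = Categorical(logits=adversary_logits)

    # Exploration bonus: how poorly the adversary predicts the actor.
    # Detached so it is treated as part of a constant advantage estimate.
    kl_bonus = kl_divergence(pi, pi_adv).detach()

    # Actor: policy gradient on a bonus-augmented advantage, rewarding
    # behaviour the adversary could not anticipate.
    actor_loss = -(pi.log_prob(actions)
                   * (advantages + c_adv * kl_bonus)).mean()

    # Critic: regression toward the returns (in the paper the targets are
    # also augmented with the adversarial bonus; omitted here for brevity).
    critic_loss = F.mse_loss(values, returns)

    # Adversary: mimic the actor by minimizing KL(pi || pi_adv) with respect
    # to the adversary's parameters only (the actor's logits are detached).
    adversary_loss = kl_divergence(
        Categorical(logits=actor_logits.detach()),
        Categorical(logits=adversary_logits)).mean()

    return actor_loss, critic_loss, adversary_loss

Detaching the bonus makes it act as an intrinsic reward rather than a direct gradient path, which matches the abstract's framing: the actor is rewarded for being unpredictable, while only the adversary is trained to close the KL gap.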
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
  • Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Source :
Harvested from HAL
Files
  • https://hal.inria.fr/hal-03167169/document (Open access)
  • http://arxiv.org/pdf/2102.04376 (Open access)