Document type:
Conference paper with published proceedings
Title:
Offline Reinforcement Learning with Pseudometric Learning
Author(s):
Dadashi, Robert [Author]
Google Research [Paris]
Rezaeifar, Shideh [Author]
University of Geneva [UNIGE]
Vieillard, Nino [Author]
Google Research [Paris]
Biology, genetics and statistics [BIGS]
Institut Élie Cartan de Lorraine [IECL]
Hussenot, Léonard [Author]
Google Research [Paris]
Scool [Scool]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Pietquin, Olivier [Author]
Google Research [Paris]
Geist, Matthieu [Author]
Google Research [Paris]
Conference:
ICML 2021 - 38th International Conference on Machine Learning
City:
Virtual
Country:
France
Conference start date:
2021-06-18
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
English abstract: [en]
Offline Reinforcement Learning methods seek to learn a policy from logged transitions of an environment, without any interaction. In the presence of function approximation, and under the assumption of limited coverage of the state-action space of the environment, it is necessary to enforce the policy to visit state-action pairs close to the support of logged transitions. In this work, we propose an iterative procedure to learn a pseudometric (closely related to bisimulation metrics) from logged transitions, and use it to define this notion of closeness. We show its convergence and extend it to the function approximation setting. We then use this pseudometric to define a new lookup based bonus in an actor-critic algorithm: PLOFF. This bonus encourages the actor to stay close, in terms of the defined pseudometric, to the support of logged transitions.
Finally, we evaluate the method on hand manipulation and locomotion tasks.
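The lookup-based bonus described in the abstract can be sketched minimally. The paper learns a pseudometric from logged transitions; here a plain Euclidean distance over concatenated state-action pairs stands in for that learned pseudometric (an assumption for illustration only, not the paper's metric), and the names `lookup_bonus` and `scale` are hypothetical:

```python
import numpy as np

def lookup_bonus(state, action, logged_states, logged_actions, scale=1.0):
    """Bonus = negative distance from (state, action) to the nearest
    logged transition. PLOFF uses a learned pseudometric for this
    distance; a Euclidean stand-in is used here for illustration."""
    pairs = np.concatenate([logged_states, logged_actions], axis=1)
    query = np.concatenate([state, action])
    dists = np.linalg.norm(pairs - query, axis=1)
    return -scale * dists.min()

# Toy usage: a pair inside the logged dataset vs. a far-away pair.
rng = np.random.default_rng(0)
S = rng.normal(size=(100, 4))   # logged states
A = rng.normal(size=(100, 2))   # logged actions
b_in = lookup_bonus(S[0], A[0], S, A)                 # pair present in the data
b_out = lookup_bonus(np.ones(4) * 10, np.ones(2) * 10, S, A)  # far from support
```

Adding such a bonus to the critic's target penalizes the actor for proposing actions far from the support of the logged transitions, which is the role the abstract assigns to it.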
Language:
English
Peer reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files
- http://arxiv.org/pdf/2103.01948 (open access)