
Offline Reinforcement Learning as Anti-Exploration

Document type:
Conference paper (with published proceedings)
Title:
Offline Reinforcement Learning as Anti-Exploration
Author(s):
Rezaeifar, Shideh [Author]
University of Geneva [UNIGE]
Dadashi, Robert [Author]
Google Research [Paris]
Vieillard, Nino [Author]
Institut Élie Cartan de Lorraine [IECL]
Biology, genetics and statistics [BIGS]
Google Research [Paris]
Hussenot, Léonard [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Scool [Scool]
Google Research [Paris]
Bachem, Olivier [Author]
Google Research [Zurich]
Pietquin, Olivier [Author]
Google Research [Paris]
Geist, Matthieu [Author]
Google Research [Paris]
Conference title:
AAAI 2022 - 36th AAAI Conference on Artificial Intelligence
Conference organizer(s):
Association for the Advancement of Artificial Intelligence (AAAI)
City:
Vancouver
Country:
Canada
Start date of the conference:
2022-02-22
HAL domain(s):
Computer Science [cs]/Machine Learning [cs.LG]
English abstract: [en]
Offline Reinforcement Learning (RL) aims at learning an optimal control from a fixed dataset, without interactions with the system. An agent in this setting should avoid selecting actions whose consequences cannot be predicted from the data. This is the converse of exploration in RL, which favors such actions. We thus take inspiration from the literature on bonus-based exploration to design a new offline RL agent. The core idea is to subtract a prediction-based exploration bonus from the reward, instead of adding it for exploration. This allows the policy to stay close to the support of the dataset. We connect this approach to a more common regularization of the learned policy towards the data. Instantiated with a bonus based on the prediction error of a variational autoencoder, we show that our agent is competitive with the state of the art on a set of continuous control locomotion and manipulation tasks.
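The abstract's core mechanism (subtract a prediction-error bonus b(s, a) from the reward, giving a penalized reward r(s, a) - alpha * b(s, a)) can be made concrete with a short sketch. The code below is an illustrative reading of that idea, not the authors' implementation: a small conditional VAE reconstructs dataset actions, its reconstruction error plays the role of the bonus, and the network sizes, latent dimension, and penalty weight alpha are all assumptions made for the example.

```python
# Minimal sketch of anti-exploration (illustrative, not the paper's code).
# A conditional VAE reconstructs dataset actions given states; its
# reconstruction error is used as a bonus b(s, a) that is *subtracted*
# from the reward instead of added, keeping the policy on the data support.
import torch
import torch.nn as nn

class ActionVAE(nn.Module):
    """Small conditional VAE over actions; sizes are illustrative assumptions."""
    def __init__(self, state_dim, action_dim, latent_dim=8, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, latent_dim)
        self.log_std = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, action):
        h = self.encoder(torch.cat([state, action], dim=-1))
        mu, log_std = self.mu(h), self.log_std(h).clamp(-4.0, 4.0)
        z = mu + log_std.exp() * torch.randn_like(mu)  # reparameterization trick
        recon = self.decoder(torch.cat([state, z], dim=-1))
        return recon, mu, log_std

def anti_exploration_reward(vae, state, action, reward, alpha=1.0):
    """Penalized reward r~(s, a) = r(s, a) - alpha * b(s, a), where the
    bonus b is the VAE's action reconstruction (prediction) error."""
    with torch.no_grad():
        recon, _, _ = vae(state, action)
        bonus = ((recon - action) ** 2).mean(dim=-1)
    return reward - alpha * bonus

# Usage on a random batch (state_dim=17, action_dim=6 are arbitrary):
vae = ActionVAE(state_dim=17, action_dim=6)
s, a, r = torch.randn(32, 17), torch.randn(32, 6), torch.randn(32)
print(anti_exploration_reward(vae, s, a, r).shape)  # torch.Size([32])
```

In a full offline RL agent, one would presumably first fit the VAE to the dataset's state-action pairs, then use the penalized reward in place of r when forming the critic's TD targets; those training details go beyond what the abstract specifies.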
Language:
English
Peer-reviewed article:
Yes
Audience:
International
Popular science:
No
Collections:
  • Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Source:
Harvested from HAL
Files:
  • http://arxiv.org/pdf/2106.06431
  • Open access