A kernel-based approach to non-stationary reinforcement learning in metric spaces
Document type:
Conference paper with proceedings
Title:
A kernel-based approach to non-stationary reinforcement learning in metric spaces
Author(s):
Domingues, Omar [Author]
Scool [Scool]
Ménard, Pierre [Author]
Otto-von-Guericke-Universität Magdeburg = Otto-von-Guericke University [Magdeburg] [OVGU]
Pirotta, Matteo [Author]
Facebook AI Research [Paris] [FAIR]
Kaufmann, Emilie [Author]
Scool [Scool]
Valko, Michal [Author]
DeepMind [Paris]
Conference:
International Conference on Artificial Intelligence and Statistics
City:
San Diego / Virtual
Country:
United States of America
Conference start date:
2021-04-13
HAL discipline(s):
Statistics [stat]/Machine Learning [stat.ML]
Abstract (English): [en]
In this work, we propose KeRNS: an algorithm for episodic reinforcement learning in non-stationary Markov Decision Processes (MDPs) whose state-action set is endowed with a metric. Using a non-parametric model of the MDP built with time-dependent kernels, we prove a regret bound that scales with the covering dimension of the state-action space and the total variation of the MDP with time, which quantifies its level of non-stationarity. Our method generalizes previous approaches based on sliding windows and exponential discounting used to handle changing environments. We further propose a practical implementation of KeRNS, analyze its regret, and validate it experimentally.
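To illustrate the idea of a non-parametric model built with time-dependent kernels, the sketch below combines a spatial kernel over the state-action metric with exponential discounting of older samples, one of the schemes the abstract says the method generalizes. This is a minimal illustration, not the paper's algorithm: the function names, the Gaussian kernel choice, and the `bandwidth`/`discount` parameters are all assumptions for the example.

```python
import math

def euclidean(x, y):
    # Metric on the state-action space (here: Euclidean distance on R^d).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def time_dependent_kernel(query, sample, t_now, t_sample,
                          bandwidth=0.5, discount=0.05):
    # Spatial similarity: Gaussian kernel over the metric.
    spatial = math.exp(-(euclidean(query, sample) / bandwidth) ** 2)
    # Temporal relevance: exponential discounting of older samples;
    # discount -> 0 recovers a stationary (time-independent) kernel.
    temporal = (1.0 - discount) ** (t_now - t_sample)
    return spatial * temporal

def kernel_reward_estimate(query, t_now, history, **kw):
    # history: list of (state_action, observation_time, reward) tuples.
    weights = [time_dependent_kernel(query, x, t_now, t, **kw)
               for (x, t, _) in history]
    total = sum(weights)
    if total == 0.0:
        return 0.0  # no relevant data near the query point
    return sum(w * r for w, (_, _, r) in zip(weights, history)) / total
```

With this weighting, a reward observed long ago at the query point contributes less than a recent one, so the estimate tracks a changing environment; transition kernels could be estimated analogously by weighting observed next-state samples.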
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
European project:
Collections:
Source:
Files
- https://hal.inria.fr/hal-03289026/document
- Open access
- Access the document
- domingues2021kernel-based-non-stationary.pdf
- Open access
- Access the document