Reinforcement Learning of POMDPs using Spectral Methods
Document type:
Conference paper with proceedings
Title:
Reinforcement Learning of POMDPs using Spectral Methods
Author(s):
Azizzadenesheli, Kamyar [Author]
University of California [Irvine] [UC Irvine]
Lazaric, Alessandro [Author]
Sequential Learning [SEQUEL]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Anandkumar, Animashree [Author]
University of California [Irvine] [UC Irvine]
Conference:
Proceedings of the 29th Annual Conference on Learning Theory (COLT 2016)
City:
New York City
Country:
United States of America
Conference start date:
2016-06
Publication date:
2016
Keywords (English):
Spectral Methods
Method of Moments
Partially Observable Markov Decision Process
Latent Variable Model
Upper Confidence Reinforcement Learning
HAL discipline(s):
Statistics [stat]/Machine Learning [stat.ML]
Abstract (English):
We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDPs) based on spectral decomposition methods. While spectral methods have been previously employed for consistent learning of (passive) latent variable models such as hidden Markov models, POMDPs are more challenging since the learner interacts with the environment and possibly changes the future observations in the process. We devise a learning algorithm running through episodes; in each episode, we employ spectral techniques to learn the POMDP parameters from a trajectory generated by a fixed policy. At the end of the episode, an optimization oracle returns the optimal memoryless planning policy which maximizes the expected reward based on the estimated POMDP model. We prove an order-optimal regret bound w.r.t. the optimal memoryless policy and efficient scaling with respect to the dimensionality of observation and action spaces.
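The abstract describes an episodic scheme: run a fixed memoryless policy for a whole episode, estimate the POMDP parameters from the resulting trajectory via spectral methods, then let a planning oracle choose the next episode's policy. The sketch below mirrors only that loop structure; the environment stub, the moment-based estimator, and the greedy oracle are simplified placeholders (the paper's spectral step decomposes moments of multi-view observations, and its oracle optimizes expected reward under the full estimated POMDP), so every function here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's code): episodic loop where each episode
# runs a fixed memoryless policy, estimates model quantities from the trajectory,
# and asks a planning oracle for the next policy.

rng = np.random.default_rng(0)
n_obs, n_actions = 3, 2

def collect_trajectory(policy, horizon):
    # Stand-in environment: in the paper, trajectories come from the true POMDP.
    obs = rng.integers(n_obs, size=horizon)
    actions = np.array([rng.choice(n_actions, p=policy[o]) for o in obs])
    rewards = rng.random(horizon)
    return obs, actions, rewards

def spectral_estimate(obs, actions, rewards):
    # Placeholder for the spectral step: the paper applies a method of moments
    # (tensor decomposition of consecutive-observation moments) to recover the
    # POMDP parameters; here we only form empirical mean rewards per (obs, action).
    totals = np.zeros((n_obs, n_actions))
    counts = np.zeros((n_obs, n_actions))
    for o, a, r in zip(obs, actions, rewards):
        totals[o, a] += r
        counts[o, a] += 1
    return totals / np.maximum(counts, 1.0)

def planning_oracle(reward_estimates):
    # A memoryless policy maps each observation to an action distribution. The
    # paper's oracle maximizes expected reward under the estimated POMDP; this
    # greedy per-observation rule is only a stub.
    policy = np.zeros((n_obs, n_actions))
    policy[np.arange(n_obs), reward_estimates.argmax(axis=1)] = 1.0
    return policy

policy = np.full((n_obs, n_actions), 1.0 / n_actions)  # start from the uniform policy
for episode in range(5):
    obs, actions, rewards = collect_trajectory(policy, horizon=1000)
    estimates = spectral_estimate(obs, actions, rewards)
    policy = planning_oracle(estimates)  # kept fixed throughout the next episode
```

One caveat on the stub above: a fully deterministic greedy policy would starve some (observation, action) pairs of samples, whereas the algorithm analyzed in the paper keeps enough exploration within each episode for the spectral estimator to remain consistent.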
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files:
- https://hal.inria.fr/hal-01322207/document
- Open access