Reinforcement Learning of POMDPs using Spectral Methods
Document type :
Conference paper with proceedings
Title :
Reinforcement Learning of POMDPs using Spectral Methods
Author(s) :
Azizzadenesheli, Kamyar [Author]
University of California [Irvine] [UC Irvine]
Lazaric, Alessandro [Author]
Sequential Learning [SEQUEL]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Anandkumar, Animashree [Author]
University of California [Irvine] [UC Irvine]
Conference title :
Proceedings of the 29th Annual Conference on Learning Theory (COLT2016)
City :
New York City
Country :
United States of America
Start date of the conference :
2016-06
Publication date :
2016
English keyword(s) :
Spectral Methods
Method of Moments
Partially Observable Markov Decision Process
Latent Variable Model
Upper Confidence Reinforcement Learning
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDPs) based on spectral decomposition methods. While spectral methods have previously been employed for consistent learning of (passive) latent variable models such as hidden Markov models, POMDPs are more challenging since the learner interacts with the environment and may change the future observations in the process. We devise a learning algorithm that runs through episodes; in each episode we employ spectral techniques to learn the POMDP parameters from a trajectory generated by a fixed policy. At the end of the episode, an optimization oracle returns the optimal memoryless planning policy that maximizes the expected reward based on the estimated POMDP model. We prove an order-optimal regret bound with respect to the optimal memoryless policy and efficient scaling with respect to the dimensionality of the observation and action spaces.
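The abstract describes an episodic loop: run a fixed memoryless policy, estimate the POMDP parameters from the trajectory via a spectral method of moments, then re-plan with an optimization oracle. The Python sketch below is only a schematic illustration of that loop, not the authors' algorithm: the names run_episode, spectral_estimate, and planning_oracle are hypothetical, the spectral step is reduced to an SVD of pairwise observation moments (the paper uses third-order tensor decompositions), and the oracle is replaced by a naive optimistic-greedy rule.

```python
# Schematic sketch of the episodic structure described in the abstract.
# All helper names are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_obs, n_actions = 2, 3, 2

# Ground-truth POMDP used only to simulate trajectories in this demo.
T = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # T[a, s] -> dist over s'
O = rng.dirichlet(np.ones(n_obs), size=n_states)                  # O[s]    -> dist over y
R = rng.random((n_states, n_actions))                              # mean reward r(s, a)

def run_episode(policy, horizon):
    """Roll out a fixed memoryless policy (observation -> action)."""
    s = rng.integers(n_states)
    ys, acts, rews = [], [], []
    for _ in range(horizon):
        y = rng.choice(n_obs, p=O[s])
        a = policy[y]
        ys.append(y); acts.append(a); rews.append(R[s, a])
        s = rng.choice(n_states, p=T[a, s])
    return np.array(ys), np.array(acts), np.array(rews)

def spectral_estimate(ys, acts):
    """Stand-in for the method-of-moments step: build empirical pairwise
    moments of consecutive observations per action and take rank-n_states
    SVD factors. The actual algorithm uses third-order tensor methods."""
    M2 = np.zeros((n_actions, n_obs, n_obs))
    for t in range(len(ys) - 1):
        M2[acts[t], ys[t], ys[t + 1]] += 1.0
    M2 /= max(len(ys) - 1, 1)
    return [np.linalg.svd(M2[a])[0][:, :n_states] for a in range(n_actions)]

def planning_oracle(params, ys, acts, rews):
    """Placeholder oracle: ignores the spectral estimate and greedily picks,
    for each observation, the empirically best action (optimistic on unseen
    observation-action pairs)."""
    policy = np.zeros(n_obs, dtype=int)
    for y in range(n_obs):
        means = [rews[(ys == y) & (acts == a)].mean()
                 if ((ys == y) & (acts == a)).any() else np.inf
                 for a in range(n_actions)]
        policy[y] = int(np.argmax(means))
    return policy

policy = rng.integers(n_actions, size=n_obs)  # arbitrary initial memoryless policy
for episode in range(5):
    ys, acts, rews = run_episode(policy, horizon=2000)
    params = spectral_estimate(ys, acts)
    policy = planning_oracle(params, ys, acts, rews)
    print(f"episode {episode}: mean reward {rews.mean():.3f}")
```

The sketch preserves only the control flow (rollout, moment estimation, re-planning per episode); the regret guarantee in the paper depends on the actual tensor-decomposition estimator and oracle, neither of which is implemented here.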
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files
- https://hal.inria.fr/hal-01322207/document
- Open access
- Access the document