Regret Bounds for Reinforcement Learning with Policy Advice
Document type:
Conference paper with proceedings
Title:
Regret Bounds for Reinforcement Learning with Policy Advice
Author(s):
Gheshlaghi Azar, Mohammad [Author]
Computer Science Department - Carnegie Mellon University
Lazaric, Alessandro [Author]
Laboratoire d'Informatique Fondamentale de Lille [LIFL]
Sequential Learning [SEQUEL]
Brunskill, Emma [Author]
Computer Science Department - Carnegie Mellon University
Conference title:
ECML/PKDD - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
City:
Prague
Country:
Czech Republic
Conference start date:
2013-09
Publication date:
2013-09
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Abstract (English):
In some reinforcement learning problems an agent may be provided with a set of input policies, perhaps learned from prior experience or provided by advisors. We present a reinforcement learning with policy advice (RLPA) algorithm which leverages this input set and learns to use the best policy in the set for the reinforcement learning task at hand. We prove that RLPA has a sub-linear regret of $\widetilde O(\sqrt{T})$ relative to the best input policy, and that both this regret and its computational complexity are independent of the size of the state and action space. Our empirical simulations support our theoretical analysis. This suggests RLPA may offer significant advantages in large domains where some prior good policies are provided.
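The core idea described in the abstract, treating each input policy as a candidate and learning online which one performs best, can be illustrated with a minimal sketch. This is not the paper's exact RLPA algorithm (the record does not reproduce it); it is a hedged UCB-style selection over a hypothetical set of input policies, each modeled here as a reward-sampling function, showing how regret against the best policy can be kept sublinear without enumerating states or actions.

```python
import math
import random

def select_among_policies(policy_rewards, horizon, seed=0):
    """Hedged sketch: UCB-style online selection among K fixed input policies.

    `policy_rewards[i](rng)` samples one reward from running policy i.
    This illustrates the general idea (learn to use the best policy in a
    given set), NOT the specific RLPA algorithm from the paper.
    """
    rng = random.Random(seed)
    k = len(policy_rewards)
    counts = [0] * k        # how often each policy was selected
    sums = [0.0] * k        # cumulative reward per policy
    for t in range(1, horizon + 1):
        if t <= k:
            i = t - 1       # try each policy once first
        else:
            # pick the policy with the highest upper confidence bound
            i = max(range(k), key=lambda j: sums[j] / counts[j]
                    + math.sqrt(2 * math.log(t) / counts[j]))
        counts[i] += 1
        sums[i] += policy_rewards[i](rng)
    return counts

# toy example: three hypothetical policies with mean rewards 0.2, 0.5, 0.8
policies = [lambda rng, m=m: 1.0 if rng.random() < m else 0.0
            for m in (0.2, 0.5, 0.8)]
counts = select_among_policies(policies, horizon=2000)
```

After 2000 rounds the selection concentrates on the best policy, mirroring the abstract's claim that the per-step cost of not knowing the best input policy shrinks over time.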
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
European project:
Collections:
Source:
Files
- https://hal.inria.fr/hal-00924021/document
- Open access
- RLPAcr.pdf