Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning
Document type:
Conference paper with proceedings
Title:
Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning
Author(s):
Maillard, Odalric Ambrym [Auteur]
Montanuniversität Leoben [MUL]
Nguyen, Phuong [Auteur]
National ICT Australia [Sydney] [NICTA]
Ortner, Ronald [Auteur]
Montanuniversität Leoben [MUL]
Ryabko, Daniil [Auteur]
Sequential Learning [SEQUEL]

Conference:
ICML - 30th International Conference on Machine Learning
City:
Atlanta, USA
Country:
United States of America
Conference start date:
2013
Journal title:
JMLR W&CP
Publication date:
2013
HAL discipline(s):
Computer Science [cs]/Artificial Intelligence [cs.AI]
Abstract (English):
We consider an agent interacting with an environment in a single stream of actions, observations, and rewards, with no reset. This process is not assumed to be a Markov Decision Process (MDP). Rather, the agent has several representations (mapping histories of past interactions to a discrete state space) of the environment with unknown dynamics, only some of which result in an MDP. The goal is to minimize the average regret criterion against an agent who knows an MDP representation giving the highest optimal reward, and acts optimally in it. Recent regret bounds for this setting are of order $O(T^{2/3})$ with an additive term constant yet exponential in some characteristics of the optimal MDP. We propose an algorithm whose regret after $T$ time steps is $O(\sqrt{T})$, with all constants reasonably small. This is optimal in $T$ since $O(\sqrt{T})$ is the optimal regret in the setting of learning in a (single discrete) MDP.
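As a brief illustration of the criterion the abstract refers to, the average-regret quantity can be sketched as follows. The notation here ($\rho^{*}$ for the optimal average reward of the best MDP-inducing representation, $r_t$ for the reward at step $t$) is assumed for illustration and may differ from the paper's own definitions:

```latex
% Sketch of the regret criterion (notation assumed, not taken verbatim from the paper):
% the learner's regret after T steps against the best MDP representation is
\[
  \Delta_T \;=\; T \,\rho^{*} \;-\; \sum_{t=1}^{T} r_t ,
\]
% where \rho^{*} is the optimal average reward under the best MDP-inducing
% representation and r_t is the reward collected at time t.
% The paper's main result bounds this quantity as \Delta_T = O(\sqrt{T}).
```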
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files:
- https://hal.inria.fr/hal-00778586/document
- icml1_iblb_cr-corrected.pdf
- Open access