Compatible Reward Inverse Reinforcement Learning
Document type:
Conference paper with proceedings
Title:
Compatible Reward Inverse Reinforcement Learning
Author(s):
Metelli, Alberto [Author]
Department of Electronics, Information, and Bioengineering [Milano] [DEIB]
Pirotta, Matteo [Author]
Sequential Learning [SEQUEL]
Restelli, Marcello [Author]
Department of Electronics, Information, and Bioengineering [Milano] [DEIB]
Conference title:
The Thirty-first Annual Conference on Neural Information Processing Systems - NIPS 2017
City:
Long Beach
Country:
United States of America
Conference start date:
2017-12-04
HAL discipline(s):
Statistics [stat]/Machine Learning [stat.ML]
Abstract (English):
Inverse Reinforcement Learning (IRL) is an effective approach to recovering a reward function that explains the behavior of an expert from a set of demonstrations. This paper presents a novel model-free IRL approach that, unlike most existing IRL algorithms, does not require specifying a function space in which to search for the expert's reward function. Leveraging the fact that the policy gradient must be zero for any optimal policy, the algorithm generates a set of basis functions that span the subspace of reward functions making the policy gradient vanish. Within this subspace, a second-order criterion is used to search for the reward function that most penalizes deviations from the expert's policy. After introducing the approach for finite domains, it is extended to continuous ones. The proposed approach is empirically compared to other IRL methods in the (finite) Taxi domain and in the (continuous) Linear Quadratic Gaussian (LQG) and Car on the Hill environments.
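As a rough illustration of the first-order condition described in the abstract, the snippet below computes an orthonormal basis of the reward directions that make an estimated policy gradient vanish, by taking the null space of the Jacobian of the expected return with respect to the policy parameters. This is a minimal NumPy sketch under assumed inputs (a toy random Jacobian); the function name and shapes are hypothetical and this is not the authors' implementation.

# Minimal sketch, NOT the authors' code: recover reward directions that make the
# expert's policy gradient vanish, via the null space of a (policy-parameters x
# reward-features) Jacobian. All inputs below are hypothetical toy data.
import numpy as np

def gradient_null_space_basis(jac, tol=1e-10):
    """Return an orthonormal basis of reward weights w with jac @ w ~ 0.

    Each column of `jac` is the gradient of the expected return w.r.t. the
    policy parameters for one reward feature. Rewards built from the returned
    basis satisfy the zero-gradient (first-order optimality) condition at the
    expert's policy that the abstract refers to.
    """
    _, s, vt = np.linalg.svd(jac)
    # Right singular vectors with (near-)zero singular values span the null space.
    rank = int(np.sum(s > tol * s.max())) if s.size else 0
    return vt[rank:].T  # shape: (n_reward_features, null_space_dim)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy Jacobian: 3 policy parameters, 5 reward features -> null space dim >= 2.
    jac = rng.normal(size=(3, 5))
    basis = gradient_null_space_basis(jac)
    print(basis.shape)                              # (5, 2)
    print(np.allclose(jac @ basis, 0, atol=1e-8))   # True: gradient vanishes

In the paper this subspace is then searched with a second-order criterion to pick the reward that most penalizes deviations from the expert's policy; the sketch only covers the null-space construction step.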
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files:
- https://hal.inria.fr/hal-01653328/document (open access)
- 6800-compatible-reward-inverse-reinforcement-learning.pdf (open access)