MESSI: Maximum Entropy Semi-Supervised Inverse Reinforcement Learning
Document type :
Communication in a conference with proceedings
Title :
MESSI: Maximum Entropy Semi-Supervised Inverse Reinforcement Learning
Author(s) :
Audiffren, Julien [Auteur]
Centre de Mathématiques et de Leurs Applications [CMLA]
Valko, Michal [Auteur]
Sequential Learning [SEQUEL]
Lazaric, Alessandro [Auteur]
Sequential Learning [SEQUEL]
Ghavamzadeh, Mohammad [Auteur]
Sequential Learning [SEQUEL]
Conference title :
NIPS Workshop on Novel Trends and Applications in Reinforcement Learning
City :
Montreal
Country :
Canada
Start date of the conference :
2014
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
A popular approach to apprenticeship learning (AL) is to formulate it as an inverse reinforcement learning (IRL) problem. The MaxEnt-IRL algorithm successfully integrates the maximum entropy principle into IRL and, unlike its predecessors, it resolves the ambiguity arising from the fact that a possibly large number of policies could match the expert's behavior. In this paper, we study an AL setting in which, in addition to the expert's trajectories, a number of unsupervised trajectories are available. We introduce MESSI, a novel algorithm that combines MaxEnt-IRL with principles coming from semi-supervised learning. In particular, MESSI integrates the unsupervised data into the MaxEnt-IRL framework using a pairwise penalty on trajectories. Empirical results in highway driving and grid-world problems indicate that MESSI is able to take advantage of the unsupervised trajectories and improve the performance of MaxEnt-IRL.
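The abstract's core idea — a MaxEnt-IRL gradient update augmented with a pairwise penalty over unsupervised trajectories — can be sketched roughly as follows. This is a minimal illustrative version, not the authors' exact objective: the softmax trajectory distribution, the RBF similarity weight, and the quadratic pairwise penalty are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def messi_style_step(theta, expert_feats, all_feats, unsup_feats,
                     lam=0.1, lr=0.01):
    """One hedged gradient step of a MaxEnt-IRL-style update with a
    pairwise penalty on unsupervised trajectories (illustrative sketch).

    theta        : reward weight vector, shape (d,)
    expert_feats : feature counts of expert trajectories, shape (n_e, d)
    all_feats    : feature counts of candidate trajectories, shape (n, d)
    unsup_feats  : feature counts of unsupervised trajectories, shape (n_u, d)
    """
    # MaxEnt trajectory distribution: p(tau) ∝ exp(theta · f(tau))
    scores = all_feats @ theta
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()

    # Standard MaxEnt-IRL gradient:
    # expert feature expectation minus model feature expectation
    grad = expert_feats.mean(axis=0) - probs @ all_feats

    # Hypothetical pairwise penalty: encourage similar unsupervised
    # trajectories (RBF similarity on features) to receive similar rewards,
    # i.e. penalize 0.5 * w_ij * (theta · (f_i - f_j))^2.
    penalty_grad = np.zeros_like(theta)
    for i in range(len(unsup_feats)):
        for j in range(i + 1, len(unsup_feats)):
            diff = unsup_feats[i] - unsup_feats[j]
            w = np.exp(-np.dot(diff, diff))            # similarity weight
            penalty_grad += w * np.dot(theta, diff) * diff

    # Ascend the MaxEnt-IRL objective, descend the penalty
    return theta + lr * (grad - lam * penalty_grad)
```

The design choice mirrored here is the standard semi-supervised manifold assumption: trajectories that are close in feature space should not be assigned very different rewards, which is what lets the unlabeled trajectories shape the learned reward.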
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
Source :
Files
- audiffren2014messi.pdf: https://hal.inria.fr/hal-01177446/document (Open access)