Score-based Inverse Reinforcement Learning
Document type :
Conference paper with proceedings
Title :
Score-based Inverse Reinforcement Learning
Author(s) :
El Asri, Layla [Author]
Georgia Tech Lorraine [Metz]
Orange Labs [Issy les Moulineaux]
Piot, Bilal [Author]
Sequential Learning [SEQUEL]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Université de Lille, Sciences et Technologies
Geist, Matthieu [Author]
MAchine Learning and Interactive Systems [MALIS]
Laroche, Romain [Author]
Orange Labs [Issy les Moulineaux]
Pietquin, Olivier [Author]
Institut universitaire de France [IUF]
Sequential Learning [SEQUEL]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Université de Lille, Sciences et Technologies
Conference title :
International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016)
City :
Singapore
Country :
Singapore
Start date of the conference :
2016-05-09
English keyword(s) :
Reinforcement Learning
Inverse Reinforcement Learning
Markov Decision Processes
Learning from Demonstration
Spoken Dialogue Systems
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Human-Computer Interaction [cs.HC]
English abstract : [en]
This paper reports theoretical and empirical results obtained for the score-based Inverse Reinforcement Learning (IRL) algorithm. It relies on a non-standard setting for IRL consisting of learning a reward from a set of globally scored trajectories. This allows using any type of policy (optimal or not) to generate trajectories without prior knowledge during data collection. This way, any existing database (like logs of systems in use) can be scored a posteriori by an expert and used to learn a reward function. Thanks to this reward function, it is shown that a near-optimal policy can be computed. Being related to least-squares regression, the algorithm (called SBIRL) comes with theoretical guarantees that are proven in this paper. SBIRL is compared to standard IRL algorithms on synthetic data, showing that annotations do help under conditions on the quality of the trajectories. It is also shown to be suitable for real-world applications such as the optimisation of a spoken dialogue system.
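
As an illustration of the setting described in the abstract, here is a minimal sketch of score-based reward learning cast as least-squares regression, in the spirit of SBIRL. It assumes a reward linear in features, r(s, a) = theta . phi(s, a), so that a trajectory's cumulative discounted reward equals theta . mu(tau), with mu(tau) the discounted feature count; the feature map phi, the discount gamma, and all names below are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def feature_expectations(trajectory, phi, gamma=0.95):
    # Discounted feature count: mu(tau) = sum_t gamma^t * phi(s_t, a_t),
    # where trajectory is a sequence of (state, action) pairs.
    return sum((gamma ** t) * phi(s, a) for t, (s, a) in enumerate(trajectory))

def fit_reward(trajectories, scores, phi, gamma=0.95):
    # Least-squares fit of theta so that theta . mu(tau_i) matches the
    # global score an expert assigned to trajectory i.
    X = np.stack([feature_expectations(tau, phi, gamma) for tau in trajectories])
    y = np.asarray(scores, dtype=float)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta  # estimated reward: r(s, a) = theta @ phi(s, a)

With the learned theta, the reward r(s, a) = theta @ phi(s, a) can then be fed to any standard reinforcement learning solver to compute the near-optimal policy the abstract refers to.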
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files
- https://hal.inria.fr/hal-01406886/document - Open access
- aamas-score-based.pdf - Open access