Show me the Way: Intrinsic Motivation from Demonstrations
Document type :
Conference paper with proceedings
Title :
Show me the Way: Intrinsic Motivation from Demonstrations
Author(s) :
Hussenot, Léonard [Author]
Scool [Scool]
Google Research [Paris]
Dadashi, Robert [Author]
Google Research [Paris]
Geist, Matthieu [Author]
Google Research [Paris]
Pietquin, Olivier [Author]
Google Research [Paris]
Conference title :
AAMAS 2021 - 20th International Conference on Autonomous Agents and Multiagent Systems
City :
Virtual
Country :
United Kingdom
Start date of the conference :
2021-05-03
HAL domain(s) :
Computer Science [cs]
English abstract : [en]
The study of exploration in the domain of decision making has a long history but remains actively debated. From the vast literature that has addressed this topic for decades from various points of view (e.g., developmental psychology, experimental design, artificial intelligence), intrinsic motivation emerged as a concept that can practically be transferred to artificial agents. In particular, in the recent field of Deep Reinforcement Learning (RL), agents implement such a concept (mainly using a novelty argument) in the shape of an exploration bonus, added to the task reward, that encourages visiting the whole environment. This approach is supported by the large body of RL theory in which convergence to optimality assumes exhaustive exploration. Yet, human beings and mammals do not exhaustively explore the world, and their motivation is not only based on novelty but also on various other factors (e.g., curiosity, fun, style, pleasure, safety, competition, etc.). They optimize for life-long learning and train to learn transferable skills in playgrounds without obvious goals. They also apply innate or learned priors to save time and stay safe. For these reasons, we propose to learn an exploration bonus from demonstrations that could transfer these motivations to an artificial agent with few assumptions about their rationale. Using an inverse RL approach, we show that complex exploration behaviors, reflecting different motivations, can be learnt and efficiently used by RL agents to solve tasks for which exhaustive exploration is prohibitive.
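The abstract describes adding a demonstration-derived exploration bonus to the task reward. The following is a minimal, illustrative sketch of that shaping idea only, not the paper's actual inverse-RL method: the bonus model here is a simple kernel-density proxy over demonstration states, and names such as `bonus_from_demos` and `LAMBDA` are assumptions introduced for the example.

```python
# Minimal sketch (not the paper's implementation): shaping a task reward with an
# exploration bonus derived from demonstration states. The kernel-density proxy
# below merely stands in for whatever bonus model would be learned from
# demonstrations; all names and constants are illustrative assumptions.
import numpy as np


def bonus_from_demos(demo_states, bandwidth=0.5):
    """Return a bonus function b(s) that is larger for states near demonstrated ones."""
    demo_states = np.asarray(demo_states, dtype=float)

    def bonus(state):
        state = np.asarray(state, dtype=float)
        dists = np.linalg.norm(demo_states - state, axis=1)
        return float(np.exp(-(dists ** 2) / (2 * bandwidth ** 2)).mean())

    return bonus


LAMBDA = 0.1  # assumed weight of the bonus relative to the task reward


def shaped_reward(task_reward, state, bonus):
    """Combine the environment's task reward with the demonstration-derived bonus."""
    return task_reward + LAMBDA * bonus(state)


if __name__ == "__main__":
    demos = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]  # toy demonstration states
    bonus = bonus_from_demos(demos)
    # A state close to the demonstrations gets a larger shaped reward than a distant one.
    print(shaped_reward(0.0, [1.1, 0.9], bonus))
    print(shaped_reward(0.0, [10.0, -5.0], bonus))
```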
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Comment :
AAMAS 2021
Collections :
Source :
Files
- https://hal.inria.fr/hal-03162139/document (Open access)
- http://arxiv.org/pdf/2006.12917 (Open access)