Mean Field Games Flock! The Reinforcement Learning Way
Document type:
Conference paper with proceedings
Title:
Mean Field Games Flock! The Reinforcement Learning Way
Author(s):
Perrin, Sarah [Author]
Scool [Scool]
Laurière, Mathieu [Author]
Pérolat, Julien [Author]
Geist, Matthieu [Author]
Élie, Romuald [Author]
Pietquin, Olivier [Author]
Conference title:
IJCAI
City:
Montreal
Country:
Canada
Conference start date:
2021-08-23
Journal title:
Proc. IJCAI
Publication date:
2021-08-26
HAL discipline(s):
Computer Science [cs]/Artificial Intelligence [cs.AI]
Computer Science [cs]/Computer Science and Game Theory [cs.GT]
Computer Science [cs]/Multiagent Systems [cs.MA]
Computer Science [cs]/Neural and Evolutionary Computing [cs.NE]
English abstract: [en]
We present a method enabling a large number of agents to learn how to flock, which is a natural behavior observed in large populations of animals. This problem has drawn a lot of interest but requires many structural assumptions and is tractable only in small dimensions. We phrase this problem as a Mean Field Game (MFG), where each individual chooses its acceleration depending on the population behavior. Combining Deep Reinforcement Learning (RL) and Normalizing Flows (NF), we obtain a tractable solution requiring only very weak assumptions. Our algorithm finds a Nash Equilibrium and the agents adapt their velocity to match the neighboring flock's average one. We use Fictitious Play and alternate: (1) computing an approximate best response with Deep RL, and (2) estimating the next population distribution with NF. We show numerically that our algorithm learns multi-group or high-dimensional flocking with obstacles.
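The abstract's Fictitious Play loop alternates a best-response step against the current population estimate with a re-estimation of the population distribution from the history of play. A minimal toy sketch of that alternation (all functions here are hypothetical stand-ins: `best_response` replaces the Deep RL component and `fit_distribution` the Normalizing Flow, reduced to scalar velocities) could look like:

```python
def best_response(mean_velocity, target=1.0):
    # Toy stand-in for the Deep RL step: steer own velocity halfway
    # between the estimated flock average and a preferred speed.
    return 0.5 * mean_velocity + 0.5 * target

def fit_distribution(velocities):
    # Toy stand-in for the Normalizing Flow step: summarize the
    # population of past best responses by its mean velocity.
    return sum(velocities) / len(velocities)

def fictitious_play(n_iters=50):
    velocities = [0.0]  # history of best responses (initial population)
    mean_v = 0.0
    for _ in range(n_iters):
        # (1) approximate best response against the current estimate
        br = best_response(mean_v)
        velocities.append(br)
        # (2) re-estimate the population distribution from the history
        mean_v = fit_distribution(velocities)
    return mean_v

print(fictitious_play())
```

In this toy instance the averaged population velocity climbs monotonically toward the fixed point 1.0, mirroring how Fictitious Play averages past best responses so the empirical distribution converges toward the equilibrium; the paper's actual method replaces these scalars with high-dimensional states, neural policies, and flow-based density estimates.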
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files
- http://arxiv.org/pdf/2105.07933 (open access)