Mean Field Games Flock! The Reinforcement Learning Way
Document type :
Communication dans un congrès avec actes
Title :
Mean Field Games Flock! The Reinforcement Learning Way
Author(s) :
Perrin, Sarah [Author]
Scool [Scool]
Laurière, Mathieu [Author]
Pérolat, Julien [Author]
Geist, Matthieu [Author]
Élie, Romuald [Author]
Pietquin, Olivier [Author]
Conference title :
IJCAI
City :
Montreal
Country :
Canada
Start date of the conference :
2021-08-23
Journal title :
Proc. IJCAI
Publication date :
2021-08-26
HAL domain(s) :
Computer Science [cs]/Artificial Intelligence [cs.AI]
Computer Science [cs]/Computer Science and Game Theory [cs.GT]
Computer Science [cs]/Multi-agent Systems [cs.MA]
Computer Science [cs]/Neural Networks [cs.NE]
English abstract : [en]
We present a method enabling a large number of agents to learn how to flock, which is a natural behavior observed in large populations of animals. This problem has drawn a lot of interest but requires many structural assumptions and is tractable only in small dimensions. We phrase this problem as a Mean Field Game (MFG), where each individual chooses its acceleration depending on the population behavior. Combining Deep Reinforcement Learning (RL) and Normalizing Flows (NF), we obtain a tractable solution requiring only very weak assumptions. Our algorithm finds a Nash Equilibrium, and the agents adapt their velocity to match the neighboring flock's average one. We use Fictitious Play and alternate: (1) computing an approximate best response with Deep RL, and (2) estimating the next population distribution with NF. We show numerically that our algorithm learns multi-group or high-dimensional flocking with obstacles.
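The Fictitious Play scheme in the abstract can be illustrated with a toy sketch. This is not the paper's algorithm: the quadratic cost, the `best_response` and `fictitious_play` helpers, and the reduction of the population distribution to its mean velocity are all illustrative assumptions, standing in for the Deep RL best-response step and the Normalizing Flow density estimate.

```python
import numpy as np

def best_response(population_mean, target):
    # Toy stand-in for the Deep RL step: an agent minimising
    # 0.5*(v - population_mean)**2 + 0.5*(v - target)**2 over v
    # picks the midpoint of the flock's mean and its own target.
    return 0.5 * (population_mean + target)

def fictitious_play(target, iterations=1000):
    # Toy stand-in for the NF step: the population "distribution"
    # is summarised by its mean velocity only.
    population_mean = np.zeros_like(target)
    for k in range(1, iterations + 1):
        v_br = best_response(population_mean, target)
        # Fictitious Play: average the new best response into the
        # running estimate of the population behaviour.
        population_mean += (v_br - population_mean) / k
    return population_mean
```

Under these assumptions the averaged population velocity drifts toward the fixed point where the best response matches the population mean, which is the Nash Equilibrium of this toy game.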
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files
- http://arxiv.org/pdf/2105.07933
- Open access