Learning in Stackelberg Games with Application to Strategic Bidding in the Electricity Market
Document type :
Other scientific communication (conference without proceedings, poster, seminar, ...): Conference paper (with proceedings)
Title :
Learning in Stackelberg Games with Application to Strategic Bidding in the Electricity Market
Author(s) :
Morri, Francesco [Corresponding author]
Integrated Optimization with Complex Structure [INOCS]
Inria Lille - Nord Europe
Le Cadre, Hélène [Author]
Integrated Optimization with Complex Structure [INOCS]
Inria Lille - Nord Europe
Brotcorne, Luce [Author]
Integrated Optimization with Complex Structure [INOCS]
Inria Lille - Nord Europe
Gruet, Pierre [Author]
EDF R&D [EDF R&D]
Integrated Optimization with Complex Structure [INOCS]
Inria Lille - Nord Europe
Conference title :
EEM24
City :
Istanbul
Country :
Turkey
Start date of the conference :
2024-06-10
English keyword(s) :
Energy Market
Multi Agent Reinforcement Learning
Market Simulation
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
English abstract : [en]
We formulate a two-stage electricity market involving conventional and renewable producers strategically bidding in the day-ahead market to maximize their profits, while anticipating the market clearing performed by an Independent System Operator (ISO), as a multi-leader single-follower Stackelberg game. In this game, producers are interpreted as leaders, while the ISO acts as a follower. To compute an equilibrium, the classical approach is to cast the Stackelberg game as a Generalized Nash Game (GNG), replacing the ISO's optimization problem by its KKT constraints. To solve this reformulated problem, we can rely either on the Gauss-Seidel Best-Response method (GS-BR) or on the Alternating Direction Method of Multipliers (ADMM). However, both approaches are implemented in a centralized setting, since they require the existence of a coordinator which keeps track of the history of agents' strategies and sequential updates, or which is responsible for the Lagrange multiplier updates of the augmented Lagrangian. To allow the agents to selfishly optimize their utility functions in a decentralized setting, we introduce a variant of an actor-critic Multi-Agent deep Reinforcement Learning (MARL) algorithm with provable convergence. Our algorithm is innovative in that it allows different levels of coordination among the actors and the critic, thus capturing different information structures of the Stackelberg game. We conclude this work by comparing GS-BR and ADMM, both used as benchmarks, to the MARL algorithm on a dataset from the French electricity market, relying on metrics such as the efficiency loss and the accuracy of the solution.
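The abstract cites the Gauss-Seidel Best-Response (GS-BR) method as one of the centralized benchmarks. Purely as an illustration of that generic scheme, the following is a minimal Python sketch under assumed ingredients: scalar bids, placeholder profit functions and bounds, and toy Cournot-style numbers, none of which come from the paper itself.

import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative sketch only, not the paper's implementation: a generic
# Gauss-Seidel Best-Response (GS-BR) loop over N bidding producers.
def gs_best_response(profit_fns, x0, bounds, max_iters=100, tol=1e-6):
    """Cycle over players, letting each best-respond to the others' bids.

    profit_fns[i](xi, x) is the profit of player i bidding xi while the
    other entries of the joint bid vector x stay fixed; any follower
    response (e.g. the ISO's market clearing, via its KKT system) is
    assumed to be embedded inside each profit function.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        x_prev = x.copy()
        for i, f in enumerate(profit_fns):
            # Player i maximizes its own profit given the current bids
            # of all other players (minimize the negated profit).
            res = minimize_scalar(lambda xi: -f(xi, x),
                                  bounds=bounds[i], method="bounded")
            x[i] = res.x
        # Stop when a full sweep no longer changes any bid: a fixed
        # point of the best-response map, i.e. a candidate equilibrium.
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# Toy usage with two producers and price 10 - total bid (hypothetical
# numbers): the sweep converges to bids near 10/3 each.
if __name__ == "__main__":
    fns = [lambda xi, x, i=i: xi * (10 - xi - x[1 - i]) for i in range(2)]
    print(gs_best_response(fns, x0=[1.0, 1.0], bounds=[(0, 10), (0, 10)]))

Note the coordination this loop requires: each player's update reads the others' latest bids in sequence, which is exactly the centralized bookkeeping that the paper's decentralized MARL variant is designed to avoid.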
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files :
- EEM24_Paper_with_names.pdf (Open access)