AdaStop: sequential testing for efficient and reliable comparisons of Deep RL Agents
Document type :
Research report: Other scientific communication (conference without proceedings, poster, seminar, etc.)
Title :
AdaStop: sequential testing for efficient and reliable comparisons of Deep RL Agents
Author(s) :
Mathieu, Timothée [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Della Vecchia, Riccardo [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Shilova, Alena [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Centa de Medeiros, Matheus [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Kohler, Hector [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Maillard, Odalric Ambrym [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Preux, Philippe [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Institution :
Inria Lille Nord Europe - Laboratoire CRIStAL - Université de Lille
Publication date :
2023-06
English keyword(s) :
Reproducibility
Reinforcement Learning
Sequential Tests
HAL domain(s) :
Statistiques [stat]/Machine Learning [stat.ML]
English abstract : [en]
The reproducibility of many experimental results in Deep Reinforcement Learning (RL) is under question. To address this reproducibility crisis, we propose a theoretically sound methodology for comparing multiple Deep RL algorithms. The performance of a single execution of a Deep RL algorithm is random, so independent executions are needed to assess it precisely. When comparing several RL algorithms, a major question is how many executions must be made and how we can ensure that the result of such a comparison is theoretically sound. Researchers in Deep RL often use fewer than 5 independent executions to compare algorithms: we claim that this is not enough in general. Moreover, when comparing several algorithms at once, the errors of the individual comparisons accumulate and must be taken into account with a multiple-testing procedure to preserve low error guarantees. To address this problem in a statistically sound way, we introduce AdaStop, a new statistical test based on multiple group sequential tests. When comparing algorithms, AdaStop adapts the number of executions to stop as early as possible while ensuring that enough information is available to distinguish, in a statistically significant way, the algorithms that perform better than the others. We prove both theoretically and empirically that AdaStop has a low probability of making an error (Family-Wise Error). Finally, we illustrate the effectiveness of AdaStop in multiple use cases, including toy examples and difficult cases such as MuJoCo environments.
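The abstract describes AdaStop only at a high level. As a rough illustration of the group-sequential idea it builds on (this is not the actual AdaStop procedure: the fixed batch size, the permutation test on the difference of means, and the crude Bonferroni correction over interim looks are all simplifying assumptions), a toy two-agent comparison could be sketched as follows:

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=2000, rng=None):
    """Two-sided permutation test on the difference of means of two samples."""
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[: len(x)].mean() - pooled[len(x):].mean())
        hits += diff >= observed
    return (hits + 1) / (n_perm + 1)

def sequential_compare(run_a, run_b, batch=10, max_batches=5, alpha=0.05, seed=0):
    """Collect scores from both agents in batches and stop as soon as a
    permutation test (Bonferroni-corrected over interim looks) rejects."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for k in range(1, max_batches + 1):
        xs.extend(run_a() for _ in range(batch))   # one "execution" = one score
        ys.extend(run_b() for _ in range(batch))
        p = permutation_pvalue(np.array(xs), np.array(ys), rng=rng)
        if p < alpha / max_batches:   # crude correction for repeated testing
            return "different", k * batch          # stop early: agents differ
    return "inconclusive", max_batches * batch     # execution budget exhausted
```

For instance, comparing a mean-0 agent against a mean-5 agent (both with unit-variance Gaussian scores) typically stops after the first batch of 10 executions per agent, whereas two identical agents usually exhaust the budget. AdaStop itself replaces the naive Bonferroni correction with calibrated group-sequential thresholds and handles more than two algorithms at once.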
Language :
English
Files
- RR-9513.pdf
- Open access