Document type:
Research report: Other scientific communication (conference without proceedings - poster - seminar...)
Title:
AdaStop: sequential testing for efficient and reliable comparisons of Deep RL Agents
Author(s):
Mathieu, Timothée [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Della Vecchia, Riccardo [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Shilova, Alena [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Centa de Medeiros, Matheus [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Kohler, Hector [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Maillard, Odalric Ambrym [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Preux, Philippe [Auteur]
Scool [Scool]
Université de Lille
Centre National de la Recherche Scientifique [CNRS]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Institution:
Inria Lille Nord Europe - Laboratoire CRIStAL - Université de Lille
Publication date:
2023-06
English keyword(s):
Reproducibility
Reinforcement Learning
Sequential Tests
HAL discipline(s):
Statistics [stat]/Machine Learning [stat.ML]
English abstract: [en]
The reproducibility of many experimental results in Deep Reinforcement Learning (RL) is under question. To address this reproducibility crisis, we propose a theoretically sound methodology for comparing multiple Deep RL algorithms. The performance of a single execution of a Deep RL algorithm is random, so independent executions are needed to assess it precisely. When comparing several RL algorithms, a major question is how many executions must be made, and how we can ensure that the results of such a comparison are theoretically sound. Researchers in Deep RL often use fewer than 5 independent executions to compare algorithms: we claim that this is not enough in general. Moreover, when comparing several algorithms at once, the error of each comparison accumulates and must be taken into account with a multiple-testing procedure to preserve low error guarantees. To address this problem in a statistically sound way, we introduce AdaStop, a new statistical test based on multiple group sequential tests. When comparing algorithms, AdaStop adapts the number of executions so as to stop as early as possible, while ensuring that we have enough information to distinguish algorithms that perform better than the others in a statistically significant way. We prove both theoretically and empirically that AdaStop has a low probability of making an error (family-wise error). Finally, we illustrate the effectiveness of AdaStop in multiple use cases, including toy examples and difficult cases such as MuJoCo environments.
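The core idea described in the abstract, interleaving batches of executions with a statistical test and stopping as soon as a difference is established, can be illustrated with a toy sketch. The snippet below is not the AdaStop algorithm: `permutation_pvalue`, `sequential_compare`, and the batch sizes are invented for illustration, and, unlike AdaStop, naively re-testing after every batch does not control the family-wise error that the report's procedure guarantees.

```python
import random

def permutation_pvalue(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test p-value for a difference of means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

def sequential_compare(run_a, run_b, batch=5, max_batches=6, alpha=0.05):
    """Run both agents in batches; stop as soon as a permutation test
    rejects equality of mean performance at level alpha.  Caveat: this
    naive repeated testing inflates the error rate; calibrating such
    repeated group-sequential tests is exactly AdaStop's contribution."""
    scores_a, scores_b = [], []
    p = 1.0
    for k in range(1, max_batches + 1):
        scores_a.extend(run_a() for _ in range(batch))
        scores_b.extend(run_b() for _ in range(batch))
        p = permutation_pvalue(scores_a, scores_b)
        if p < alpha:
            return "different", k * batch, p
    return "undecided", max_batches * batch, p
```

With two simulated "agents" whose mean scores differ clearly (say, Gaussian scores centred at 2.0 vs. 0.0), the loop typically stops after the first batch of 5 executions per agent, which mirrors the adaptive-stopping behaviour the abstract describes.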
Language:
English
Collections:
Source:
Files
- document
- Open access
- Access the document
- RR-9513.pdf
- Open access
- Access the document