Deep Reinforcement Learning and the Deadly Triad
Document type:
Preprint or working paper
Title:
Deep Reinforcement Learning and the Deadly Triad
Author(s):
van Hasselt, Hado [Author]
DeepMind [London]
Doron, Yotam [Author]
DeepMind [London]
Strub, Florian [Author]
Sequential Learning [SEQUEL]
Université de Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
DeepMind [London]
Hessel, Matteo [Author]
DeepMind [London]
Sonnerat, Nicolas [Author]
DeepMind [London]
Modayil, Joseph [Author]
DeepMind [London]
HAL discipline(s):
Computer Science [cs]/Neural Networks [cs.NE]
Computer Science [cs]/Artificial Intelligence [cs.AI]
English abstract: [en]
We know from reinforcement learning theory that temporal difference learning can fail in certain cases. Sutton and Barto (2018) identify a deadly triad of function approximation, bootstrapping, and off-policy learning. When these three properties are combined, learning can diverge with the value estimates becoming unbounded. However, several algorithms successfully combine these three properties, which indicates that there is at least a partial gap in our understanding. In this work, we investigate the impact of the deadly triad in practice, in the context of a family of popular deep reinforcement learning models - deep Q-networks trained with experience replay - analysing how the components of this system play a role in the emergence of the deadly triad, and in the agent's performance.
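Note: the abstract refers to the interaction of function approximation, bootstrapping, and off-policy learning in temporal difference methods such as Q-learning. Below is a minimal, hypothetical sketch (not the paper's experimental setup; the tiny random MDP, feature matrix phi, and hyperparameters are invented purely for illustration) showing where each of the three deadly-triad components enters a semi-gradient Q-learning update with linear function approximation.

import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, n_features = 5, 2, 4
gamma, alpha = 0.99, 0.1

# Function approximation: Q(s, a) = phi(s, a) . w rather than a lookup table.
phi = rng.normal(size=(n_states, n_actions, n_features))
w = np.zeros(n_features)

def q(s, a):
    return phi[s, a] @ w

def td_update(s, a, r, s_next):
    global w
    # Bootstrapping: the target uses the current estimate max_b Q(s', b).
    target = r + gamma * max(q(s_next, b) for b in range(n_actions))
    td_error = target - q(s, a)
    # Semi-gradient update: the bootstrapped target is treated as a constant.
    w = w + alpha * td_error * phi[s, a]

# Off-policy learning: transitions come from a uniform-random behaviour
# policy (think of samples drawn from a replay buffer), while the max in
# the target evaluates the greedy policy.
for _ in range(1000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    s_next = rng.integers(n_states)   # hypothetical random dynamics
    r = float(rng.normal())           # hypothetical random reward
    td_update(s, a, r, s_next)

print("learned weights:", w)

In the deep Q-network setting the paper studies, the linear form phi . w would be replaced by a neural network and the transitions would be drawn from an experience replay buffer, which is where the off-policy component arises.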
Language:
English
Files
- http://arxiv.org/pdf/1812.02648
- Open access
- Access the document
- 1812.02648
- Open access
- Access the document