Document type:
Conference paper with proceedings
Title:
Adaptive black-box optimization got easier: HCT only needs local smoothness
Author(s):
Shang, Xuedong [Author]
Sequential Learning [SEQUEL]
Kaufmann, Emilie [Author]
Sequential Learning [SEQUEL]
Valko, Michal [Author]
Sequential Learning [SEQUEL]
Conference title:
European Workshop on Reinforcement Learning
City:
Lille
Country:
France
Conference start date:
2018-10-01
Publication date:
2018
Keywords (English):
continuously-armed bandits
global optimization
black-box optimization
HAL discipline(s):
Statistics [stat]/Machine Learning [stat.ML]
Abstract (English):
Hierarchical bandits are an approach to the global optimization of extremely irregular functions. This paper provides new elements regarding POO, an adaptive meta-algorithm that does not require knowledge of the local smoothness of the target function. We first highlight that the subroutine used by POO should have small regret under the assumption of local smoothness with respect to the chosen partitioning, whereas it is unknown whether this is satisfied by the standard subroutine HOO. In this work, we establish such a regret guarantee for HCT, another hierarchical optimistic optimization algorithm, which requires knowledge of the smoothness. This confirms the validity of POO. We show that POO can use HCT as a subroutine, with a regret upper bound that matches that of the best-known algorithms using knowledge of the smoothness, up to a √(log n) factor.
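The meta-algorithm structure described in the abstract — an adaptive outer loop that wraps subroutine instances, each committed to a different smoothness assumption, and splits the evaluation budget among them — can be illustrated with a minimal toy sketch. This is not the paper's HCT or POO: `RandomSearchStub` is a hypothetical stand-in subroutine, and the `rho` grid, round-robin budget split, and recentring rule are illustrative assumptions only.

```python
import math
import random

def poo_style_meta(objective, subroutines, budget):
    """Toy POO-style wrapper: allocate the budget round-robin across
    subroutine instances (each assuming a different smoothness rho)
    and return the best point observed by any of them."""
    best_x, best_val = None, -math.inf
    for t in range(budget):
        sub = subroutines[t % len(subroutines)]  # round-robin allocation
        x = sub.next_point()                     # candidate from this instance
        y = objective(x)                         # black-box evaluation
        sub.update(x, y)
        if y > best_val:
            best_x, best_val = x, y
    return best_x, best_val

class RandomSearchStub:
    """Hypothetical stand-in subroutine: samples around its current
    center with a radius set by rho (a real POO would instantiate
    HCT or HOO here instead)."""
    def __init__(self, rho, lo=0.0, hi=1.0, seed=0):
        self.rho, self.lo, self.hi = rho, lo, hi
        self.center = 0.5 * (lo + hi)
        self.best = -math.inf
        self.rng = random.Random(seed)

    def next_point(self):
        radius = self.rho * (self.hi - self.lo)
        x = self.center + self.rng.uniform(-radius, radius)
        return min(self.hi, max(self.lo, x))  # clip to the domain

    def update(self, x, y):
        # greedily recenter on the best point observed so far
        if y > self.best:
            self.best, self.center = y, x

# Unknown function, smooth around its maximizer x* = 0.7.
f = lambda x: 1.0 - abs(x - 0.7)
subs = [RandomSearchStub(rho, seed=i) for i, rho in enumerate([0.5, 0.25, 0.125])]
x_star, v_star = poo_style_meta(f, subs, budget=300)
```

The point of the wrapper is that no single instance needs to guess the right smoothness: the outer loop hedges across a grid of candidate parameters and keeps the best outcome, which is the adaptivity property the paper establishes rigorously for POO with HCT as the subroutine.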
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
ANR project:
Collections:
Source:
Files
- https://hal.inria.fr/hal-01874637/document
- Open access
- Access the document
- shang2018adaptive.pdf
- Open access
- Access the document