Document type:
Conference paper with proceedings
Title:
Adaptive multi-fidelity optimization with fast learning rates
Author(s):
Fiegel, Côme [Author]
Scool [Scool]
Département d'informatique - ENS-PSL [DI-ENS]
Gabillon, Victor [Author]
Huawei R&D [United Kingdom]
Valko, Michal [Author]
Scool [Scool]
Département d'informatique - ENS-PSL [DI-ENS]

Conference:
International Conference on Artificial Intelligence and Statistics
City:
Palermo
Country:
Italy
Conference start date:
2020
HAL discipline(s):
Statistiques [stat]/Machine Learning [stat.ML]
English abstract:
In multi-fidelity optimization, we have access to biased approximations of varying costs of the target function. In this work, we study the setting of optimizing a locally smooth function with a limited budget Λ, where the learner has to make a trade-off between the cost and the bias of these approximations. We first prove lower bounds for the simple regret under different assumptions on the fidelities, based on a cost-to-bias function. We then present the Kometo algorithm, which achieves, up to additional logarithmic factors, the same rates without any knowledge of the function smoothness or the fidelity assumptions, improving on prior results. Finally, we empirically show that our algorithm outperforms prior multi-fidelity optimization methods without knowledge of problem-dependent parameters.
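The cost-bias trade-off the abstract describes can be illustrated with a minimal toy sketch. This is not the Kometo algorithm (which is not specified here); it is a hypothetical two-stage strategy: spend half the budget on a cheap, biased fidelity to localize the maximizer, then the rest on costly exact evaluations to refine. The target function, the fidelity costs, and the bias model are all invented for illustration.

```python
import math

def f(x):
    """Target function to maximize on [0, 1] (unknown to the learner)."""
    return math.sin(13 * x) * math.sin(27 * x) / 2 + 0.5

def make_fidelity(cost, bias):
    """A biased approximation of f: error bounded by `bias`, charged `cost` per call."""
    def g(x):
        # Deterministic perturbation staying within +/- bias of f(x).
        return f(x) + bias * math.sin(50 * x)
    return g, cost

def two_stage_search(budget):
    """Toy multi-fidelity strategy under a total evaluation budget.

    Stage 1 localizes the optimum with a cheap, biased fidelity;
    stage 2 refines inside the best coarse cell with exact evaluations.
    """
    coarse, coarse_cost = make_fidelity(cost=1.0, bias=0.05)
    exact_cost = 10.0  # exact evaluations of f are 10x more expensive
    # Stage 1: uniform grid evaluated with the cheap fidelity.
    n1 = int((budget / 2) / coarse_cost)
    grid = [i / (n1 - 1) for i in range(n1)]
    x0 = max(grid, key=coarse)
    # Stage 2: exact evaluations within one coarse-cell radius of x0.
    n2 = int((budget / 2) / exact_cost)
    radius = 1.0 / n1
    fine = [min(1.0, max(0.0, x0 + radius * (2 * i / (n2 - 1) - 1)))
            for i in range(n2)]
    return max(fine, key=f)

x_hat = two_stage_search(budget=200.0)
# Simple regret: gap between the (approximate) global maximum and f(x_hat).
regret = max(f(i / 9999) for i in range(10000)) - f(x_hat)
```

With the same budget spent only on exact evaluations, stage 1 would cover a ten times coarser grid; the cheap fidelity buys resolution at the price of a bounded bias, which is exactly the trade-off the paper's cost-to-bias function formalizes.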
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files
- https://hal.inria.fr/hal-03288879/document (open access)
- fiegel2020adaptive.pdf (open access)