Gaussian process optimization with adaptive sketching: Scalable and no regret
Document type:
Conference paper with proceedings
Title:
Gaussian process optimization with adaptive sketching: Scalable and no regret
Author(s):
Calandriello, Daniele [Author]
Carratino, Luigi [Author]
Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi [Genova] [DIBRIS]
Lazaric, Alessandro [Author]
Facebook
Valko, Michal [Author]
Sequential Learning [SEQUEL]
Rosasco, Lorenzo [Author]
Conference:
Conference on Learning Theory
City:
Phoenix
Country:
United States of America
Conference start date:
2019
English keyword(s):
kernelized linear bandits
sparse Gaussian process optimization
regret
sketching
Bayesian optimization
black-box optimization
variance starvation
HAL discipline(s):
Statistiques [stat]/Machine Learning [stat.ML]
English abstract: [en]
Gaussian processes (GPs) are stochastic processes used in Bayesian approaches to the optimization of black-box functions. Despite their effectiveness on simple problems, GP-based algorithms hardly scale to high-dimensional functions, as their per-iteration time and space cost is at least quadratic in the number of dimensions d and iterations t. Given a set of A alternatives to choose from, the overall O(t³A) runtime is prohibitive. In this paper, we introduce BKB (budgeted kernelized bandit), a new approximate GP algorithm for optimization under bandit feedback that achieves near-optimal regret (and hence near-optimal convergence rate) with near-constant per-iteration complexity and, remarkably, no assumption on the input space or the covariance of the GP. We combine a kernelized linear bandit algorithm (GP-UCB) with leverage score sampling as a randomized matrix sketching technique, and prove that selecting inducing points based on their posterior variance gives an accurate low-rank approximation of the GP, preserving variance estimates and confidence intervals. As a consequence, BKB does not suffer from variance starvation, an important problem faced by many previous sparse GP approximations. Moreover, we show that our procedure selects at most O(d_eff) points, where d_eff is the effective dimension of the explored space, which is typically much smaller than both d and t. This greatly reduces the dimensionality of the problem, leading to an O(T A d_eff²) runtime and O(A d_eff) space complexity.
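The abstract's central idea, selecting inducing points with probability proportional to their GP posterior variance (which approximates their ridge leverage scores), can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' BKB implementation: the RBF kernel, the regularization lam, and the oversampling factor q are arbitrary choices made here for the example.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def posterior_variance(X_train, X_query, lam=0.1):
    """GP posterior variance at X_query given observed inputs X_train."""
    K = rbf(X_train, X_train)
    k = rbf(X_query, X_train)
    A = np.linalg.inv(K + lam * np.eye(len(X_train)))
    # diag(k_prior) - diag(k A k^T): prior variance minus explained variance
    return np.diag(rbf(X_query, X_query)) - np.einsum('ij,jk,ik->i', k, A, k)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))      # 50 evaluated arms in 2 dimensions
var = posterior_variance(X, X)

# Keep each point with probability proportional to its posterior variance,
# capped at 1; q is an illustrative oversampling factor.
q = 2.0
keep = rng.random(50) < np.minimum(1.0, q * var / var.max())
inducing = X[keep]                        # budgeted set of inducing points
```

In this toy run the retained set is typically much smaller than the 50 evaluated points, mirroring the O(d_eff) budget the paper proves for BKB.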
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files
- https://hal.inria.fr/hal-02144311/document
- Open access
- Access the document
- calandriello2019gaussian.pdf
- Open access
- Access the document