Gaussian process optimization with adaptive sketching: Scalable and no regret
Document type :
Conference paper with published proceedings
Title :
Gaussian process optimization with adaptive sketching: Scalable and no regret
Author(s) :
Calandriello, Daniele [Auteur]
Istituto Italiano di Tecnologia [IIT]
Carratino, Luigi [Auteur]
Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi [Genova] [DIBRIS]
Lazaric, Alessandro [Auteur]
Facebook
Valko, Michal [Auteur]
Sequential Learning [SEQUEL]
Rosasco, Lorenzo [Auteur]
Istituto Italiano di Tecnologia [IIT]
Conference title :
Conference on Learning Theory
City :
Phoenix
Country :
United States of America
Start date of the conference :
2019
English keyword(s) :
kernelized linear bandits
sparse Gaussian process optimization
regret
sketching
Bayesian optimization
black-box optimization
variance starvation
HAL domain(s) :
Statistiques [stat]/Machine Learning [stat.ML]
English abstract : [en]
Gaussian processes (GPs) are stochastic processes used as a Bayesian approach to the optimization of black-box functions. Despite their effectiveness on simple problems, GP-based algorithms hardly scale to high-dimensional functions, as their per-iteration time and space cost is at least quadratic in the number of dimensions d and iterations t. Given a set of A alternatives to choose from, the overall runtime O(t^3 A) is prohibitive. In this paper, we introduce BKB (budgeted kernelized bandit), a new approximate GP algorithm for optimization under bandit feedback that achieves near-optimal regret (and hence near-optimal convergence rate) with near-constant per-iteration complexity and, remarkably, no assumption on the input space or on the covariance of the GP. We combine a kernelized linear bandit algorithm (GP-UCB) with leverage score sampling as a randomized matrix sketching technique, and prove that selecting inducing points based on their posterior variance gives an accurate low-rank approximation of the GP, preserving variance estimates and confidence intervals. As a consequence, BKB does not suffer from variance starvation, an important problem faced by many previous sparse GP approximations. Moreover, we show that our procedure selects at most O(d_eff) points, where d_eff is the effective dimension of the explored space, which is typically much smaller than both d and t. This greatly reduces the dimensionality of the problem, thus leading to an O(T A d_eff^2) runtime and O(A d_eff) space complexity.
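The abstract compresses several moving parts; the following is a minimal Python sketch of a BKB-style loop under simplifying assumptions, not the authors' implementation. It maintains a Nystrom (inducing-point) approximation of the GP posterior, selects arms by upper confidence bound as in GP-UCB, and resamples the inducing set with inclusion probabilities proportional to the approximate posterior variance, which acts as a ridge leverage score estimate. All names (rbf, nystrom_posterior) and constants (beta, lam, q) are illustrative assumptions; the residual term k(x,x) - phi(x)^T phi(x) is kept in the variance so that points far from the inducing set retain uncertainty, the property that prevents variance starvation.

import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, ls=1.0):
    # Squared-exponential kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ls ** 2))

def nystrom_posterior(X_query, X_obs, y_obs, Z, lam=1.0):
    # Approximate GP posterior mean/variance using inducing points Z.
    K_zz = rbf(Z, Z)
    U, s, _ = np.linalg.svd(K_zz)
    root_inv = U / np.sqrt(np.maximum(s, 1e-12))   # factor of K_zz^{-1/2}
    Phi_o = rbf(X_obs, Z) @ root_inv               # embedded observations
    Phi_q = rbf(X_query, Z) @ root_inv             # embedded query points
    A = Phi_o.T @ Phi_o + lam * np.eye(Z.shape[0])
    mu = Phi_q @ np.linalg.solve(A, Phi_o.T @ y_obs)
    # Keep the residual k(x,x) - phi(x)^T phi(x): far from the inducing
    # set the features explain little of k(x,x), so uncertainty survives.
    resid = np.maximum(1.0 - (Phi_q ** 2).sum(1), 0.0)   # rbf(x, x) = 1
    var = resid + lam * np.einsum("ij,jk,ik->i", Phi_q, np.linalg.inv(A), Phi_q)
    return mu, var

# Toy run: 50 arms on a grid, unknown reward observed with noise.
arms = np.linspace(-3.0, 3.0, 50)[:, None]
f = lambda X: np.sin(2.0 * X[:, 0]) - 0.1 * X[:, 0] ** 2
beta, lam, q = 2.0, 1.0, 2.0                       # illustrative constants

X_obs = arms[rng.integers(len(arms), size=1)]
y_obs = f(X_obs) + 0.1 * rng.standard_normal(1)
Z = X_obs.copy()                                   # initial inducing set

for t in range(30):
    mu, var = nystrom_posterior(arms, X_obs, y_obs, Z, lam)
    x_t = arms[np.argmax(mu + beta * np.sqrt(var))]        # UCB choice
    X_obs = np.vstack([X_obs, x_t])
    y_obs = np.append(y_obs, f(x_t[None]) + 0.1 * rng.standard_normal(1))
    # Resample inducing set: include each observed point with probability
    # proportional to its approximate posterior variance (capped at 1).
    _, var_obs = nystrom_posterior(X_obs, X_obs, y_obs, Z, lam)
    keep = rng.random(len(X_obs)) < np.minimum(q * var_obs, 1.0)
    Z = X_obs[keep] if keep.any() else X_obs[-1:]

mu, _ = nystrom_posterior(arms, X_obs, y_obs, Z, lam)
print("arm with highest estimated mean:", arms[np.argmax(mu)][0])

Because the inducing set is resampled adaptively, its size tracks the effective dimension d_eff of the explored region rather than the iteration count t, which is the source of the O(T A d_eff^2) runtime and O(A d_eff) space bounds stated above.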
Language :
English
Peer reviewed article :
Yes
Audience :
Internationale
Popular science :
No
Files
- https://hal.inria.fr/hal-02144311/document
- calandriello2019gaussian.pdf
- Open access