Streaming kernel regression with provably adaptive mean, variance, and regularization
Document type:
Report and critical review of a work
Title:
Streaming kernel regression with provably adaptive mean, variance, and regularization
Author(s):
Durand, Audrey [Author]
Université Laval [Québec] [ULaval]
Maillard, Odalric Ambrym [Author]
Sequential Learning [SEQUEL]
Pineau, Joelle [Author]
McGill University = Université McGill [Montréal, Canada]
Journal title:
Journal of Machine Learning Research
Pagination:
1-48
Publisher:
Microtome Publishing
Publication date:
2018
ISSN:
1532-4435
Keyword(s) in English:
kernel
regression
online learning
adaptive tuning
bandits
HAL discipline(s):
Mathematics [math]/Statistics [math.ST]
Statistics [stat]/Machine Learning [stat.ML]
English abstract:
We consider the problem of streaming kernel regression, where the observations arrive sequentially and the goal is to recover the underlying mean function, assumed to belong to an RKHS. The variance of the noise is not assumed to be known. In this context, we tackle the problem of tuning the regularization parameter adaptively at each time step, while maintaining tight confidence-bound estimates on the value of the mean function at each point. To this end, we first generalize existing results for finite-dimensional linear regression with fixed regularization and known variance to the kernel setup, with a regularization parameter allowed to be a measurable function of past observations. Then, using appropriate self-normalized inequalities, we build upper and lower bound estimates for the variance, leading to Bernstein-like concentration bounds. The latter is used to define the adaptive regularization. The bounds resulting from our technique are valid uniformly over all observation points and all time steps, and are compared against the literature with numerical experiments. Finally, the potential of these tools is illustrated by an application to kernelized bandits, where we revisit the Kernel UCB and Kernel Thompson Sampling procedures and show the benefits of the novel adaptive kernel tuning strategy.
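The abstract describes kernel ridge regression updated as observations stream in, with a regularization parameter that may depend on the past observations. The following minimal Python sketch illustrates that setting only; the variance-based rule for the regularization here is a simplified placeholder (the paper derives Bernstein-type upper and lower variance bounds instead), and all names and parameter choices are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian RBF kernel on scalar inputs; gamma is an illustrative bandwidth
    return np.exp(-gamma * (np.asarray(a) - np.asarray(b)) ** 2)

class StreamingKernelRidge:
    """Kernel ridge regression updated one observation at a time.

    The regularization is recomputed from past observations at each
    step, i.e. it is a measurable function of the history; the crude
    plug-in variance estimate below stands in for the paper's
    Bernstein-type construction.
    """

    def __init__(self, kernel=rbf_kernel):
        self.kernel = kernel
        self.xs = []
        self.ys = []

    def update(self, x, y):
        # observations arrive sequentially
        self.xs.append(x)
        self.ys.append(y)

    def _lam(self):
        # Placeholder adaptive regularization: sample variance of the
        # observed responses, floored away from zero.
        if len(self.ys) < 2:
            return 1.0
        return max(float(np.var(self.ys, ddof=1)), 1e-3)

    def predict(self, x):
        # Kernel ridge mean estimate: k(x)^T (K + lam * I)^{-1} y
        if not self.xs:
            return 0.0
        X = np.array(self.xs)
        K = np.array([[self.kernel(a, b) for b in X] for a in X])
        k = np.array([self.kernel(x, b) for b in X])
        alpha = np.linalg.solve(K + self._lam() * np.eye(len(X)),
                                np.array(self.ys))
        return float(k @ alpha)
```

Feeding noisy samples of a smooth function, e.g. `model.update(x, np.sin(3 * x) + noise)`, and querying `model.predict(x)` after each step mirrors the streaming protocol: the mean estimate and the regularization are both refreshed as data arrive.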
Language:
English
Popular science:
No
Collections:
Source:
Files
- https://hal.archives-ouvertes.fr/hal-01927007/document
- Open access
- Access the document
- http://arxiv.org/pdf/1708.00768
- Open access
- Access the document