Simpler PAC-Bayesian Bounds for Hostile Data
Document type:
Article in a scientific journal
Permanent URL:
Title:
Simpler PAC-Bayesian Bounds for Hostile Data
Author(s):
Journal title:
Machine Learning
Number:
107
Pages:
887–902
Publisher:
Springer Verlag
Publication date:
2018
ISSN:
0885-6125
Keyword(s):
F-divergence
Oracle inequalities
Dependent and unbounded data
PAC-Bayesian theory
HAL discipline(s):
Statistics [stat]/Machine Learning [stat.ML]
Abstract (English):
PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution $\rho$ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution $\pi$. Unfortunately, most of the available bounds typically rely on heavy assumptions such as boundedness and independence of the observations. This paper aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as \emph{hostile data}). In these bounds the Kullback-Leibler divergence is replaced with a general version of Csisz\'ar's $f$-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
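For orientation (this is a textbook illustration, not the paper's own theorem), the classical bounded-i.i.d. PAC-Bayesian bound that the abstract refers to, and the Csisz\'ar $f$-divergence that generalizes its Kullback-Leibler term, can be sketched as:

```latex
% Classical McAllester-type bound: for a loss in [0,1] and an i.i.d. sample
% of size n, with probability at least 1 - \delta, simultaneously for all
% aggregation distributions \rho,
R(\rho) \;\le\; \hat{R}_n(\rho)
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \log\frac{2\sqrt{n}}{\delta}}{2n}}

% Csisz\'ar's f-divergence, for convex f with f(1) = 0
% (KL is the special case f(x) = x \log x):
D_f(\rho \,\|\, \pi) \;=\; \int f\!\left(\frac{\mathrm{d}\rho}{\mathrm{d}\pi}\right)\mathrm{d}\pi
```

The paper's contribution is to obtain bounds of a similar shape in which the KL term is replaced by such a $D_f$ term, without assuming boundedness or independence of the observations.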
Language:
English
Audience:
International
Popular science:
No
Institution(s):
CNRS
Université de Lille
Deposit date:
2020-06-08T14:11:19Z
2020-06-09T09:17:15Z
Files:
- Open access