A PAC-Bayes bound for deterministic classifiers
Document type :
Preprint or working paper
Title :
A PAC-Bayes bound for deterministic classifiers
Author(s) :
Clerico, Eugenio [Author]
University of Oxford
Deligiannidis, George [Author]
University of Oxford
Guedj, Benjamin [Author]
University College London [UCL]
Department of Computer Science [UCL-CS]
The Inria London Programme [Inria-London]
MOdel for Data Analysis and Learning [MODAL]
Doucet, Arnaud [Author]
University of Oxford
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
Computer Science [cs]/Machine Learning [cs.LG]
Statistics [stat]/Theory [stat.TH]
English abstract : [en]
We establish a disintegrated PAC-Bayesian bound, for classifiers that are trained via continuous-time (non-stochastic) gradient descent. Contrarily to what is standard in the PAC-Bayesian setting, our result applies to a training algorithm that is deterministic, conditioned on a random initialisation, without requiring any $\textit{de-randomisation}$ step. We provide a broad discussion of the main features of the bound that we propose, and we study analytically and empirically its behaviour on linear models, finding promising results.
Language :
English
Files
- 2209.02525.pdf (Open access)