On Margins and Derandomisation in PAC-Bayes
Document type :
Conference paper with proceedings
Title :
On Margins and Derandomisation in PAC-Bayes
Author(s) :
Biggs, Felix [Author]
The Inria London Programme [Inria-London]
Department of Computer science [University College of London] [UCL-CS]
Guedj, Benjamin [Author]
The Inria London Programme [Inria-London]
MOdel for Data Analysis and Learning [MODAL]
Inria-CWI [Inria-CWI]
Department of Computer science [University College of London] [UCL-CS]
Conference title :
AISTATS 2022 - 25th International Conference on Artificial Intelligence and Statistics
City :
Valencia
Country :
Spain
Start date of the conference :
2022-03-28
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
Statistics [stat]/Machine Learning [stat.ML]
Statistics [stat]/Theory [stat.TH]
English abstract : [en]
We give a general recipe for derandomising PAC-Bayesian bounds using margins, with the critical ingredient being that our randomised predictions concentrate around some value. The tools we develop straightforwardly lead to margin bounds for various classifiers, including linear prediction—a class that includes boosting and the support vector machine—single-hidden-layer neural networks with an unusual erf activation function, and deep ReLU networks. Further, we extend to partially-derandomised predictors where only some of the randomness is removed, letting us extend bounds to cases where the concentration properties of our predictors are otherwise poor.
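As background for the abstract's mention of single-hidden-layer networks with an erf activation and margin-based reasoning, the sketch below illustrates how classification margins of such a network might be computed; it is an illustrative assumption on our part, not the paper's construction, and all names, shapes, and initialisations are hypothetical.

```python
import numpy as np
from scipy.special import erf

# Illustrative sketch only: a single-hidden-layer network with erf activation,
# and the multiclass classification margin that margin-style bounds refer to.
# Shapes, names, and the random weights are assumptions, not the paper's method.

def forward(x, W1, W2):
    """One hidden layer with erf activation, followed by a linear output layer."""
    h = erf(x @ W1)   # hidden activations, bounded in (-1, 1)
    return h @ W2     # class scores

def margins(X, y, W1, W2):
    """Margin = score of the true class minus the best competing class score."""
    scores = forward(X, W1, W2)                  # shape (n, num_classes)
    true = scores[np.arange(len(y)), y]
    scores[np.arange(len(y)), y] = -np.inf       # mask out the true class
    return true - scores.max(axis=1)             # positive => correct with a margin

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
y = rng.integers(0, 3, size=8)
W1 = rng.normal(size=(5, 16)) / np.sqrt(5)
W2 = rng.normal(size=(16, 3)) / np.sqrt(16)
print(margins(X, y, W1, W2))
```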
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
ANR Project :
Collections :
Source :
Files :
- 2107.03955.pdf (Open access)