Wasserstein PAC-Bayes Learning: Exploiting Optimisation Guarantees to Explain Generalisation
Document type :
Preprint or working paper
Title :
Wasserstein PAC-Bayes Learning: Exploiting Optimisation Guarantees to Explain Generalisation
Author(s) :
Haddouche, Maxime [Auteur]
The Inria London Programme [Inria-London]
MOdel for Data Analysis and Learning [MODAL]
Guedj, Benjamin [Auteur]
University College of London [London] [UCL]
Department of Computer science [University College of London] [UCL-CS]
The Alan Turing Institute
The Inria London Programme [Inria-London]
MOdel for Data Analysis and Learning [MODAL]
Publication date :
2023-04-14
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
Computer Science [cs]/Learning [cs.LG]
English abstract : [en]
PAC-Bayes learning is an established framework both to assess the generalisation ability of learning algorithms and to design new learning algorithms by exploiting generalisation bounds as training objectives. Most of the existing bounds involve a \emph{Kullback-Leibler} (KL) divergence, which fails to capture the geometric properties of the loss function that are often useful in optimisation. We address this by extending the emerging \emph{Wasserstein PAC-Bayes} theory. We develop new PAC-Bayes bounds with Wasserstein distances replacing the usual KL, and demonstrate that sound optimisation guarantees translate to good generalisation abilities. In particular, we provide generalisation bounds for the \emph{Bures-Wasserstein SGD} by exploiting its optimisation properties.
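To make the abstract's contrast concrete, the sketch below compares the two discrepancy measures it mentions, evaluated between two Gaussian distributions: the KL divergence (the quantity in most classical PAC-Bayes bounds) and the squared 2-Wasserstein distance, whose closed form between Gaussians is the Bures-Wasserstein distance underlying the Bures-Wasserstein SGD the paper studies. This is a minimal illustration using standard closed-form formulas, not code from the paper; the function names are my own.

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein_sq(m1, S1, m2, S2):
    """Squared 2-Wasserstein distance between N(m1, S1) and N(m2, S2):
    ||m1 - m2||^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})."""
    s2_half = sqrtm(S2)
    cross = np.real(sqrtm(s2_half @ S1 @ s2_half))  # sqrtm may return tiny imaginary parts
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross))

def kl_gaussians(m1, S1, m2, S2):
    """KL(N(m1, S1) || N(m2, S2)) via the closed-form Gaussian expression."""
    d = len(m1)
    S2_inv = np.linalg.inv(S2)
    diff = m2 - m1
    return float(0.5 * (np.trace(S2_inv @ S1) + diff @ S2_inv @ diff - d
                        + np.log(np.linalg.det(S2) / np.linalg.det(S1))))

# Two isotropic Gaussians whose means differ by 1 along the first axis.
m1, S1 = np.zeros(2), np.eye(2)
m2, S2 = np.array([1.0, 0.0]), np.eye(2)
print(bures_wasserstein_sq(m1, S1, m2, S2))  # 1.0: squared distance between the means
print(kl_gaussians(m1, S1, m2, S2))          # 0.5: the corresponding KL divergence
```

Unlike the KL divergence, the Wasserstein distance is a metric on distributions that reflects the geometry of the underlying space, which is the property the abstract argues makes it better suited to optimisation-aware generalisation bounds.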
Language :
English
Collections :
Source :
Files
- main.pdf (Open access)
- 2304.07048 (Open access)