Learning via Wasserstein-Based High Probability Generalisation Bounds
Document type :
Conference paper with proceedings
Title :
Learning via Wasserstein-Based High Probability Generalisation Bounds
Author(s) :
Viallard, Paul [Auteur]
Statistical Machine Learning and Parsimony [SIERRA]
Haddouche, Maxime [Auteur]
The Inria London Programme [Inria-London]
MOdel for Data Analysis and Learning [MODAL]
Şimşekli, Umut [Auteur]
Statistical Machine Learning and Parsimony [SIERRA]
Guedj, Benjamin [Auteur]
University College London [UCL]
Department of Computer Science [University College London] [UCL-CS]
The Alan Turing Institute
The Inria London Programme [Inria-London]
MOdel for Data Analysis and Learning [MODAL]
Statistical Machine Learning and Parsimony [SIERRA]
Conference title :
NeurIPS 2023 Workshop on Optimal Transport and Machine Learning (OTML'23)
City :
New Orleans
Country :
United States of America
Start date of the conference :
2023-12-16
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
Minimising upper bounds on the population risk or the generalisation gap has been widely used in structural risk minimisation (SRM) -- this is in particular at the core of PAC-Bayesian learning. Despite its successes and unfailing surge of interest in recent years, a limitation of the PAC-Bayesian framework is that most bounds involve a Kullback-Leibler (KL) divergence term (or its variations), which might exhibit erratic behavior and fail to capture the underlying geometric structure of the learning problem -- hence restricting its use in practical applications. As a remedy, recent studies have attempted to replace the KL divergence in the PAC-Bayesian bounds with the Wasserstein distance. Even though these bounds alleviated the aforementioned issues to a certain extent, they either hold in expectation, are for bounded losses, or are nontrivial to minimize in an SRM framework. In this work, we contribute to this line of research and prove novel Wasserstein distance-based PAC-Bayesian generalisation bounds for both batch learning with independent and identically distributed (i.i.d.) data, and online learning with potentially non-i.i.d. data. Contrary to previous art, our bounds are stronger in the sense that (i) they hold with high probability, (ii) they apply to unbounded (potentially heavy-tailed) losses, and (iii) they lead to optimizable training objectives that can be used in SRM. As a result, we derive novel Wasserstein-based PAC-Bayesian learning algorithms and we illustrate their empirical advantage on a variety of experiments.
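To make the abstract's comparison concrete, here is the classical KL-based PAC-Bayesian bound (Maurer's tightening of McAllester's bound, for losses in [0,1]) next to the schematic shape of a Wasserstein-based counterpart. The second display is a sketch of the general form such bounds take, not the exact theorem proved in this paper:

```latex
% Classical KL-based PAC-Bayesian bound (Maurer's form): for a loss with
% values in [0,1], an i.i.d. sample S of size n, a prior \pi chosen before
% seeing S, and confidence level \delta, with probability at least
% 1 - \delta, simultaneously for all posteriors \rho:
\mathbb{E}_{h \sim \rho}\!\left[R(h)\right]
  \le \mathbb{E}_{h \sim \rho}\!\left[\hat{R}_S(h)\right]
  + \sqrt{\frac{\operatorname{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}

% Schematic shape of a Wasserstein-based counterpart (an illustration of
% the general form, not this paper's exact statement): if the loss is
% L-Lipschitz in the hypothesis, the KL term is replaced by a
% 1-Wasserstein distance,
\mathbb{E}_{h \sim \rho}\!\left[R(h)\right]
  \le \mathbb{E}_{h \sim \rho}\!\left[\hat{R}_S(h)\right]
  + c \, L \, \mathrm{W}_1(\rho, \pi) + \varepsilon(n, \delta)
% where c is a constant and \varepsilon(n, \delta) gathers the residual
% sample- and confidence-dependent terms.
```

The abstract's point (iii), bounds that lead to optimizable training objectives, can likewise be illustrated with a minimal sketch: minimise an empirical risk plus a Wasserstein penalty between a diagonal-Gaussian posterior and a fixed prior, for which the 2-Wasserstein distance has a closed form. The PyTorch code below is a hypothetical illustration of that kind of SRM objective, not the authors' algorithm; the names (w2_diag_gaussians, lambda_reg) and the squared-error loss are assumptions made for the example.

```python
# A minimal sketch (not the authors' algorithm) of a Wasserstein-regularised
# SRM objective: empirical risk of a stochastic predictor plus a 2-Wasserstein
# penalty between a diagonal-Gaussian posterior and a fixed prior.
import torch

def w2_diag_gaussians(m_q, s_q, m_p, s_p):
    """Closed-form 2-Wasserstein distance between the diagonal Gaussians
    N(m_q, diag(s_q^2)) and N(m_p, diag(s_p^2))."""
    return torch.sqrt(((m_q - m_p) ** 2).sum() + ((s_q - s_p) ** 2).sum())

dim = 10
# Posterior parameters (learned); prior parameters (fixed before training).
m_q = torch.zeros(dim, requires_grad=True)
log_s_q = torch.zeros(dim, requires_grad=True)
m_p, s_p = torch.zeros(dim), torch.ones(dim)

def objective(X, y, lambda_reg=0.1, n_samples=8):
    """Monte Carlo empirical risk under the posterior + Wasserstein penalty."""
    s_q = log_s_q.exp()
    risk = 0.0
    for _ in range(n_samples):
        h = m_q + s_q * torch.randn(dim)      # reparameterised posterior sample
        pred = X @ h
        risk = risk + torch.nn.functional.mse_loss(pred, y)
    risk = risk / n_samples
    return risk + lambda_reg * w2_diag_gaussians(m_q, s_q, m_p, s_p)

# Usage: one SGD step on synthetic data.
X, y = torch.randn(32, dim), torch.randn(32)
opt = torch.optim.SGD([m_q, log_s_q], lr=1e-2)
loss = objective(X, y)
loss.backward()
opt.step()
```

Running one step updates the posterior mean and scale; in a full SRM procedure the penalty coefficient would come from the bound itself rather than being a free hyperparameter.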
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
ANR Project :
Collections :
Source :
Files :
- document (Open access)
- 2306.04375 (1).pdf (Open access)
- 2306.04375 (Open access)