Learning Stochastic Majority Votes by Minimizing a PAC-Bayes Generalization Bound
Document type:
Other scientific communication (conference without proceedings, poster, seminar, ...): Conference paper with proceedings
Title:
Learning Stochastic Majority Votes by Minimizing a PAC-Bayes Generalization Bound
Author(s):
Zantedeschi, Valentina [Author]
MOdel for Data Analysis and Learning [MODAL]
The Inria London Programme [Inria-London]
Viallard, Paul [Author]
Laboratoire Hubert Curien [LabHC]
Morvant, Emilie [Author]
Laboratoire Hubert Curien [LabHC]
Emonet, Rémi [Author]
Laboratoire Hubert Curien [LabHC]
Habrard, Amaury [Author]
Laboratoire Hubert Curien [LabHC]
Germain, Pascal [Author]
Université Laval [Québec] [ULaval]
Guedj, Benjamin [Author]
Department of Computer Science [University College London] [UCL-CS]
Inria-CWI [Inria-CWI]
MOdel for Data Analysis and Learning [MODAL]
Conference title:
CAp 2022
City:
Vannes
Country:
France
Conference start date:
2022-07-05
HAL discipline(s):
Statistics [stat]/Machine Learning [stat.ML]
English abstract: [en]
We investigate a stochastic counterpart of majority votes over finite ensembles of classifiers, and study its generalization properties. While our approach holds for arbitrary distributions, we instantiate it with Dirichlet distributions: this allows for a closed-form and differentiable expression for the expected risk, which then turns the generalization bound into a tractable training objective. The resulting stochastic majority vote learning algorithm achieves state-of-the-art accuracy and benefits from (non-vacuous) tight generalization bounds, in a series of numerical experiments when compared to competing algorithms which also minimize PAC-Bayes objectives, both with uninformed (data-independent) and informed (data-dependent) priors.
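The abstract's central computational idea, a closed-form expected risk under a Dirichlet posterior, can be illustrated for binary classification. The sketch below is not the authors' code: the function name `expected_mv_error` and its interface are hypothetical, and it relies only on the standard aggregation property of the Dirichlet distribution (the total weight of any subset of voters is Beta-distributed), evaluated with `scipy.special.betainc`.

```python
# Illustrative sketch (not the authors' implementation) of the closed-form
# expected risk of a stochastic majority vote under a Dirichlet posterior,
# for binary classification.
import numpy as np
from scipy.special import betainc

def expected_mv_error(alpha, votes_correct):
    """alpha: Dirichlet concentration parameters over the m voters.
    votes_correct: boolean array of shape (n, m); True where voter j is
    correct on example i.

    If theta ~ Dirichlet(alpha), the aggregated weight of the voters that are
    wrong on a given example follows Beta(a_wrong, a_correct), where a_wrong
    and a_correct sum the corresponding entries of alpha. The stochastic
    majority vote errs when that weight reaches 1/2, so the per-example error
    probability is P(Beta(a_wrong, a_correct) >= 1/2)
    = 1 - I_{1/2}(a_wrong, a_correct), with I the regularized incomplete
    beta function. The expression is differentiable in alpha.
    """
    a_correct = votes_correct @ alpha           # sum of alpha over correct voters
    a_wrong = alpha.sum() - a_correct           # sum of alpha over wrong voters
    per_example_error = 1.0 - betainc(a_wrong, a_correct, 0.5)
    return float(np.mean(per_example_error))

# Sanity check: with a symmetric posterior and exactly half the voters wrong,
# the Beta marginal is symmetric around 1/2, so the expected error is 1/2.
alpha = np.ones(4)
votes = np.array([[True, True, False, False]])
risk = expected_mv_error(alpha, votes)
```

The paper's training objective then plugs such a differentiable risk estimate into a PAC-Bayes bound and minimizes it over the Dirichlet parameters; the sketch covers only the risk term.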
Language:
English
Peer-reviewed:
Yes
Audience:
National
Popular science:
No
ANR project:
Collections:
Source: