PAC-Bayes and Domain Adaptation
Document type:
Journal article: Original article
Title:
PAC-Bayes and Domain Adaptation
Author(s):
Germain, Pascal [Author]
Department of Computer Science and Software Engineering [Québec]
MOdel for Data Analysis and Learning [MODAL]
Habrard, Amaury [Author]
Laboratoire Hubert Curien [LabHC]
Laviolette, François [Author]
Department of Computer Science and Software Engineering [Québec]
Morvant, Emilie [Author]
Laboratoire Hubert Curien [LabHC]
Journal:
Neurocomputing
Pages:
379-397
Publisher:
Elsevier
Publication date:
2020
ISSN:
0925-2312
Keywords (English):
PAC-Bayesian Theory
Domain Adaptation
HAL discipline(s):
Statistics [stat]/Machine Learning [stat.ML]
Abstract (English):
We provide two main contributions in PAC-Bayesian theory for domain adaptation, where the objective is to learn, from a source distribution, a well-performing majority vote on a different, but related, target distribution. Firstly, we propose an improvement of our previous approach (Germain et al., 2013), which relies on a novel distribution pseudodistance based on a disagreement averaging, allowing us to derive a new, tighter domain-adaptation bound for the target risk. While this bound stands in the spirit of common domain-adaptation works, we derive a second bound (introduced in Germain et al., 2016) that brings a new perspective on domain adaptation: it upper-bounds the target risk by a trade-off, controlled by the distributions' divergence expressed as a ratio, between a source error measure and the target voters' disagreement. We discuss and compare both results, from which we obtain PAC-Bayesian generalization bounds. Furthermore, from the PAC-Bayesian specialization to linear classifiers, we infer two learning algorithms, and we evaluate them on real data.
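As an illustrative aside (not the paper's algorithm): the target voters' disagreement appearing in the second bound can be estimated from unlabeled samples alone. A minimal NumPy sketch, assuming toy Gaussian "source" and "target" data and a uniform posterior over a handful of randomly drawn linear voters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled samples: a "source" distribution and a shifted "target" one.
X_source = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
X_target = rng.normal(loc=1.0, scale=1.0, size=(500, 2))

def linear_voter(w):
    """Return a +/-1 linear classifier h(x) = sign(w . x)."""
    return lambda X: np.sign(X @ w)

def expected_disagreement(voters, X):
    """Empirical E_{h,h'} Pr_x[h(x) != h'(x)] on sample X,
    under a uniform posterior over the given finite set of voters."""
    preds = np.array([h(X) for h in voters])  # shape: (n_voters, n_points)
    n = len(voters)
    dis = 0.0
    for i in range(n):
        for j in range(n):
            dis += np.mean(preds[i] != preds[j])
    return dis / (n * n)

# Ten random linear voters stand in for the majority vote's components.
voters = [linear_voter(w) for w in rng.normal(size=(10, 2))]
d_source = expected_disagreement(voters, X_source)
d_target = expected_disagreement(voters, X_target)
print(f"source disagreement: {d_source:.3f}, target disagreement: {d_target:.3f}")
```

All names and data here are made up for illustration; the quantity computed is only the empirical pairwise disagreement term, not the full bound or either learning algorithm.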
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popularization:
No
Files:
- main.pdf (open access)
- 1707.05712 (open access)