Document type:
Preprint or working paper
Title:
Differential Privacy has Bounded Impact on Fairness in Classification
Author(s):
Mangold, Paul [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Machine Learning in Information Networks [MAGNET]
Perrot, Michaël [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurélien [Author]
Machine Learning in Information Networks [MAGNET]
Tommasi, Marc [Author]
Machine Learning in Information Networks [MAGNET]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Publication date:
2022-10-28
English keyword(s):
Fairness
Differential privacy
Classification
Machine Learning
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
English abstract: [en]
We theoretically study the impact of differential privacy on fairness in classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on accuracy conditioned on an arbitrary event (such as membership to a sensitive group), which may be of independent interest. We use the aforementioned Lipschitz property to prove a high probability bound showing that, given enough examples, the fairness level of private models is close to the one of their non-private counterparts.
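The Lipschitz property claimed in the abstract means that a small perturbation of the model parameters (such as the noise a differentially private training mechanism adds) can only shift a group fairness measure by a proportionally small amount. A minimal numerical sketch of this idea, using synthetic data, a linear classifier, and the demographic parity gap as the fairness measure (all data and parameter choices here are illustrative assumptions, not the paper's actual setup or mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X and a binary sensitive attribute s
# (group membership). Purely illustrative, not from the paper.
n = 5000
X = rng.normal(size=(n, 2))
s = (rng.random(n) < 0.5).astype(int)

def demographic_parity_gap(theta, X, s):
    """Absolute gap in positive-prediction rates between the two
    groups, for the linear classifier predicting 1 iff X @ theta > 0."""
    pred = (X @ theta > 0).astype(int)
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

theta = np.array([1.0, -0.5])            # "non-private" parameters
noise = rng.normal(scale=0.05, size=2)   # stand-in for DP output noise
theta_priv = theta + noise               # "private" parameters

gap = demographic_parity_gap(theta, X, s)
gap_priv = demographic_parity_gap(theta_priv, X, s)
print(gap, gap_priv, abs(gap - gap_priv))
```

With a small noise scale the two gaps stay close, which is the qualitative behavior the paper's high-probability bound quantifies; this sketch does not reproduce the bound itself.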
Language:
English
Comment:
18 pages
Collections:
Source:
Files:
- document (open access)
- paper.pdf (open access)
- 2210.16242 (open access)