Differential Privacy has Bounded Impact on Fairness in Classification
Document type :
Preprint or Working Paper
Title :
Differential Privacy has Bounded Impact on Fairness in Classification
Author(s) :
Mangold, Paul [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Machine Learning in Information Networks [MAGNET]
Perrot, Michaël [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurélien [Author]
Machine Learning in Information Networks [MAGNET]
Tommasi, Marc [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Machine Learning in Information Networks [MAGNET]
Publication date :
2022-10-28
English keyword(s) :
Fairness
Differential privacy
Classification
Machine Learning
HAL domain(s) :
Informatique [cs]/Apprentissage [cs.LG]
English abstract : [en]
We theoretically study the impact of differential privacy on fairness in classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on accuracy conditioned on an arbitrary event (such as membership to a sensitive group), which may be of independent interest. We use the aforementioned Lipschitz property to prove a high-probability bound showing that, given enough examples, the fairness level of private models is close to that of their non-private counterparts.
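A rough sketch of the Lipschitz claim in the abstract, in assumed notation (the symbols below are illustrative, not the paper's): let $F$ denote a group fairness measure (e.g., a demographic parity gap) evaluated at model parameters $\theta$. Pointwise Lipschitz continuity with respect to the parameters reads
\[
  |F(\theta) - F(\theta')| \;\le\; L(\theta)\,\|\theta - \theta'\|_2 ,
\]
where $L(\theta)$ is a constant that may depend on the point $\theta$ and on the data distribution. If a differentially private training procedure returns $\theta_{\text{priv}}$ satisfying $\|\theta_{\text{priv}} - \theta\|_2 \le \alpha$ with high probability, the inequality above immediately gives $|F(\theta_{\text{priv}}) - F(\theta)| \le L(\theta)\,\alpha$ with the same probability; the precise constants and conditions are those stated in the paper.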
Language :
English
Comment :
18 pages
Collections :
Source :
Files :
- document (open access)
- paper.pdf (open access)
- 2210.16242 (open access)