
Differential Privacy has Bounded Impact on Fairness in Classification

Document type :
Preprint or working paper
Title :
Differential Privacy has Bounded Impact on Fairness in Classification
Author(s) :
Mangold, Paul [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Machine Learning in Information Networks [MAGNET]
Perrot, Michaël [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurélien [Author]
Machine Learning in Information Networks [MAGNET]
Tommasi, Marc [Author]
Machine Learning in Information Networks [MAGNET]
Publication date :
2022-10-28
English keyword(s) :
Fairness
Differential privacy
Classification
Machine Learning
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
English abstract : [en]
We theoretically study the impact of differential privacy on fairness in classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on accuracy conditioned on an arbitrary event (such as membership to a sensitive group), which may be of independent interest. We use the aforementioned Lipschitz property to prove a high-probability bound showing that, given enough examples, the fairness level of private models is close to that of their non-private counterparts.
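For illustration only (not the paper's notation), the Lipschitz property described in the abstract can be sketched as follows, where F stands for a group fairness measure, θ and θ′ for two parameter vectors from the model class, and L(θ) for a pointwise Lipschitz constant; all of these symbols are assumptions made here for readability:

% Hedged LaTeX sketch of the kind of pointwise Lipschitz statement the abstract describes.
% F: a group fairness measure; \theta, \theta': model parameters;
% L(\theta): a pointwise Lipschitz constant. Notation is illustrative, not the paper's.
\[
  \lvert F(\theta) - F(\theta') \rvert \;\le\; L(\theta)\, \lVert \theta - \theta' \rVert .
\]
% Combined with a high-probability bound on \lVert \theta_{\mathrm{priv}} - \theta \rVert
% for a differentially private estimate \theta_{\mathrm{priv}}, an inequality of this form
% gives the closeness-of-fairness guarantee mentioned in the abstract:
\[
  \lvert F(\theta_{\mathrm{priv}}) - F(\theta) \rvert \;\le\; L(\theta)\, \lVert \theta_{\mathrm{priv}} - \theta \rVert
  \quad \text{with high probability.}
\]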
Language :
English
Comment :
18 pages
Collections :
  • Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Source :
Harvested from HAL
Files
  • document (Open access)
  • paper.pdf (Open access)
  • 2210.16242 (Open access)