
Differential Privacy for Bayesian Inference through Posterior Sampling

Document type :
Article in a scientific journal
Title :
Differential Privacy for Bayesian Inference through Posterior Sampling
Author(s) :
Dimitrakakis, Christos [Author]
Sequential Learning [SEQUEL]
Université de Lille, Sciences Humaines et Sociales
Harvard University [Cambridge]
Nelson, Blaine [Author]
Google Inc [Mountain View]
Zhang, Zuhe [Author]
University of Melbourne
Mitrokotsa, Aikaterini [Author]
Rubinstein, Benjamin [Author]
University of Melbourne
Journal title :
Journal of Machine Learning Research
Pages :
1–39
Publisher :
Microtome Publishing
Publication date :
2017-04
ISSN :
1532-4435
English keyword(s) :
Bayesian inference
differential privacy
robustness
adversarial learning
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
Computer Science [cs]/Cryptography and Security [cs.CR]
Mathematics [math]/Statistics [math.ST]
English abstract : [en]
Differential privacy formalises privacy-preserving mechanisms that provide access to a database. Can Bayesian inference be used directly to provide private access to data? The answer is yes: under certain conditions on the prior, sampling from the posterior distribution can lead to a desired level of privacy and utility. For a uniform treatment, we define differential privacy over arbitrary data set metrics, outcome spaces and distribution families. This allows us to also deal with non-i.i.d. or non-tabular data sets. We then prove bounds on the sensitivity of the posterior to the data, which delivers a measure of robustness. We also show how to use posterior sampling to provide differentially private responses to queries, within a decision-theoretic framework. Finally, we provide bounds on the utility of answers to queries and on the ability of an adversary to distinguish between data sets. The latter are complemented by a novel use of Le Cam's method to obtain lower bounds on distinguishability. Our results hold for arbitrary metrics, including those for the common definition of differential privacy. For specific choices of the metric, we give a number of examples satisfying our assumptions. * A preliminary version of this paper appeared in Algorithmic Learning Theory 2014 (Dimitrakakis et al., 2014). This version corrects proofs and constant factors in the upper bounds, and introduces new material on utility analysis, lower bounds and examples.
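The abstract's central idea — releasing a single draw from the posterior instead of an exact statistic — can be illustrated with a toy Beta–Bernoulli model. This is an informal sketch only, not the paper's formal mechanism or its privacy bounds; the function name and the prior pseudo-counts are illustrative assumptions.

```python
import random

def posterior_sample_release(data, alpha=2.0, beta=2.0, rng=None):
    """Release one draw from the Beta posterior of a Bernoulli parameter.

    Toy illustration of posterior sampling as a query-response mechanism:
    the released value is a random posterior sample rather than the exact
    posterior mean. Intuitively, a not-too-diffuse prior (pseudo-counts
    alpha, beta) limits how much any single record can shift the posterior,
    which is the kind of condition the paper's privacy analysis rests on.
    """
    rng = rng or random.Random()
    successes = sum(data)               # number of 1s in the binary data set
    failures = len(data) - successes
    # Conjugate update: Beta(alpha, beta) prior -> Beta posterior.
    return rng.betavariate(alpha + successes, beta + failures)

# Two neighbouring data sets (they differ in one record): the posterior
# parameters differ by a single count, so the two sampling distributions
# remain close, and a single released sample reveals little about the
# differing record.
d1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
d2 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
theta = posterior_sample_release(d1, rng=random.Random(0))
print(0.0 <= theta <= 1.0)  # True: the mechanism releases a valid probability
```

The actual privacy level of such a mechanism depends on conditions on the prior and on the choice of data-set metric, which is precisely what the paper characterises.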
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
  • Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Source :
Harvested from HAL
Files
  • https://hal.inria.fr/hal-01500302/document
  • Open access
Université de Lille © 2017