Hiding in the Crowd: A Massively Distributed Algorithm for Private Averaging with Malicious Adversaries

Document type:
Research report
Title:
Hiding in the Crowd: A Massively Distributed Algorithm for Private Averaging with Malicious Adversaries
Author(s):
Dellenbach, Pierre [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurelien [Author]
Machine Learning in Information Networks [MAGNET]
Ramon, Jan [Author]
Machine Learning in Information Networks [MAGNET]
Institution:
Inria
Publication date:
2018
HAL domain(s):
Computer Science [cs]/Machine Learning [cs.LG]
Statistics [stat]/Machine Learning [stat.ML]
English abstract: [en]
The amount of personal data collected in our everyday interactions with connected devices offers great opportunities for innovative services fueled by machine learning, but also raises serious concerns for the privacy of individuals. In this paper, we propose a massively distributed protocol for a large set of users to privately compute averages over their joint data, which can then be used to learn predictive models. Our protocol can find a solution of arbitrary accuracy, does not rely on a third party, and preserves the privacy of users throughout the execution in both the honest-but-curious and malicious adversary models. Specifically, we prove that the information observed by the adversary (the set of malicious users) does not significantly reduce the uncertainty in its prediction of private values compared to its prior belief. The level of privacy protection depends on a quantity related to the Laplacian matrix of the network graph and generally improves with the size of the graph. Furthermore, we design a verification procedure which offers protection against malicious users joining the service with the goal of manipulating the outcome of the algorithm.
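
The abstract above describes decentralized private averaging over a network graph. As a rough illustration only (not the paper's actual protocol), the following minimal Python sketch shows the general pairwise-masking idea behind such schemes: neighbouring users add opposite random terms to their values, so individual inputs are hidden while the exact sum, and hence the average, is preserved. All names below are hypothetical.

    import random

    def private_average_sketch(values, edges, noise_scale=10.0, seed=0):
        # Illustrative sketch only: each pair of neighbouring users perturbs
        # its values with opposite random terms, so individual inputs are
        # masked while the global sum stays unchanged.
        rng = random.Random(seed)
        masked = list(values)
        for i, j in edges:
            delta = rng.gauss(0.0, noise_scale)
            masked[i] += delta   # user i adds +delta
            masked[j] -= delta   # user j adds -delta; the terms cancel in the sum
        # Aggregation of the masked values; a plain sum here, whereas a fully
        # decentralised protocol would compute this by gossip over the graph.
        return sum(masked) / len(masked)

    # Toy usage: 5 users on a ring graph; the result matches the true average
    # 13.2 (up to floating-point rounding), yet no single unmasked value is revealed.
    xs = [4.0, 8.0, 15.0, 16.0, 23.0]
    ring = [(i, (i + 1) % len(xs)) for i in range(len(xs))]
    print(private_average_sketch(xs, ring))
    print(sum(xs) / len(xs))
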
Language:
English
Collections:
  • Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Source:
Harvested from HAL
Files
  • https://hal.inria.fr/hal-01923000/document (open access)