Correlated noise exchange for distributed average computation with differential privacy guarantees
Document type:
Other scientific communication (conference without proceedings - poster - seminar...)
Title:
Correlated noise exchange for distributed average computation with differential privacy guarantees
Author(s):
Sabater, César [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurelien [Author]
Machine Learning in Information Networks [MAGNET]
Ramon, Jan [Author]
Machine Learning in Information Networks [MAGNET]
Scientific event title:
Conférence sur l'Apprentissage Automatique 2020
City:
Vannes (Virtual)
Country:
France
Scientific event start date:
2020-06-23
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Statistiques [stat]/Machine Learning [stat.ML]
English abstract: [en]
The amount of personal data collected in our everyday interactions with connected devices offers great opportunities for innovative services fueled by machine learning, as well as raises serious concerns for the privacy of individuals. In this paper, we propose a differentially private protocol allowing a large set of users to compute the average of their local values. In contrast to existing work, our protocol does not rely on a third party or costly cryptographic primitives: we use simple pairwise exchanges of correlated Gaussian noise along the edges of a network graph. We analyze the differential privacy guarantees of our protocol and the role of the correlated noise, and show that we can match the accuracy of the trusted curator model. Furthermore, we design a verification procedure based on additively homomorphic commitments which offers protection against malicious users joining the service with the goal of manipulating the outcome of the algorithm.
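The key mechanism described in the abstract - pairwise exchanges of correlated Gaussian noise along graph edges - can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, graph, and parameters below are illustrative, and the sketch only shows the cancellation property (each edge's noise appears once with a plus sign and once with a minus sign, so the average is preserved exactly); the actual protocol also involves independent residual noise and commitments for its privacy and robustness guarantees.

```python
import random

def private_average(values, edges, sigma=1.0, seed=0):
    """Illustrative correlated-noise averaging sketch.

    values: list of private user values (one per node)
    edges:  list of (i, j) pairs, the edges of the network graph
    """
    rng = random.Random(seed)
    noisy = list(values)
    for i, j in edges:
        eta = rng.gauss(0.0, sigma)  # correlated noise shared by the pair
        noisy[i] += eta              # user i adds +eta to its value
        noisy[j] -= eta              # user j adds -eta: the pair cancels in the sum
    # Each user's published value is masked, but the global average is exact
    # because every +eta is matched by a -eta.
    return sum(noisy) / len(noisy)

values = [0.2, 0.8, 0.5, 0.9]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a ring graph over 4 users
print(private_average(values, edges))
```

Because the pairwise terms cancel exactly, this toy version matches the accuracy of averaging the raw values; in the full protocol, a small amount of independent noise is what provides the differential privacy guarantee.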
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
ANR project:
Collections:
Source:
Files
- http://arxiv.org/pdf/2006.07218
- Open access
- Access the document