Privacy Amplification by Decentralization

Document type:
Preprint or working paper
Permanent URL:
http://hdl.handle.net/20.500.12210/57898
Title:
Privacy Amplification by Decentralization
Author(s):
Cyffers, Edwige [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurelien [Author]
Machine Learning in Information Networks [MAGNET]
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Statistics [stat]/Machine Learning [stat.ML]
English abstract: [en]
Analyzing data owned by several parties while achieving a good trade-off between utility and privacy is a key challenge in federated learning and analytics. In this work, we introduce a novel relaxation of local differential privacy (LDP) that naturally arises in fully decentralized algorithms, i.e., when participants exchange information by communicating along the edges of a network graph without a central coordinator. This relaxation, which we call network DP, captures the fact that users have only a local view of the system. To show the relevance of network DP, we study a decentralized model of computation where a token performs a walk on the network graph and is updated sequentially by the party who receives it. For tasks such as real summation, histogram computation and optimization with gradient descent, we propose simple algorithms on ring and complete topologies. We prove that the privacy-utility trade-offs of our algorithms under network DP significantly improve upon what is achievable under LDP (sometimes even matching the utility of the trusted curator model), showing for the first time that formal privacy gains can be obtained from full decentralization. Our experiments illustrate the improved utility of our approach for decentralized training with stochastic gradient descent.
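
The abstract describes a token-walk model of decentralized computation. As a purely illustrative aid, the following minimal sketch (not the authors' implementation) simulates the real-summation task on a ring topology: each party adds its private value plus local noise to a token that is passed along the ring, so a party only ever sees an already-aggregated, noisy token. The function name `ring_token_summation` and the noise scale `sigma` are placeholders and are not calibrated to any formal (network) DP guarantee.

```python
# Illustrative sketch of the token-walk model on a ring, assuming a placeholder
# Gaussian noise scale `sigma` (not calibrated to a formal privacy budget).
import numpy as np

def ring_token_summation(private_values, sigma=0.1, seed=0):
    """Estimate the sum of private_values by passing a noisy token around a ring.

    Each party adds its own value plus Gaussian noise to the token; no party
    observes another party's raw contribution, only the aggregated noisy token.
    """
    rng = np.random.default_rng(seed)
    token = 0.0
    for x in private_values:  # the token visits parties in ring order
        token += x + rng.normal(0.0, sigma)
    return token

values = np.array([0.2, 0.5, 0.1, 0.9])
print("true sum:", values.sum(), "noisy estimate:", ring_token_summation(values))
```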
Language:
English
ANR project(s):
Decentralized and personalized machine learning under constraints
Distributed, personalized, privacy-preserving learning for speech processing
Decentralized and privacy-preserving machine learning
Collections:
  • Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Source:
Harvested from HAL
Deposit date:
2021-11-20T02:00:46Z
Files
  • https://hal.inria.fr/hal-03100005v3/document
  • Open access
  • Access the document
  • http://arxiv.org/pdf/2012.05326
  • Open access
  • Access the document