Document type:
Conference paper with proceedings
Title:
Differentially Private Decentralized Learning with Random Walks
Author(s):
Cyffers, Edwige [Auteur]
Machine Learning in Information Networks [MAGNET]
Université de Lille
Bellet, Aurelien [Auteur]
Médecine de précision par intégration de données et inférence causale [PREMEDICAL]
Université de Montpellier [UM]
Upadhyay, Jalaj [Auteur]
Department of Computer Science [Rutgers]
Conference title:
ICML 2024 - Forty-first International Conference on Machine Learning
City:
Vienna
Country:
Austria
Conference start date:
2024-07-21
Publisher:
arXiv
Publication date:
2024
Keywords (English):
Machine Learning (cs.LG)
Cryptography and Security (cs.CR)
FOS: Computer and information sciences
HAL discipline(s):
Computer Science [cs]
Abstract (English):
The popularity of federated learning comes from the possibility of better scalability and the ability for participants to keep control of their data, improving data security and sovereignty. Unfortunately, sharing model updates also creates a new privacy attack surface. In this work, we characterize the privacy guarantees of decentralized learning with random walk algorithms, where a model is updated by traveling from one node to another along the edges of a communication graph. Using a recent variant of differential privacy tailored to the study of decentralized algorithms, namely Pairwise Network Differential Privacy, we derive closed-form expressions for the privacy loss between each pair of nodes, where the impact of the communication topology is captured by graph-theoretic quantities. Our results further reveal that random walk algorithms tend to yield better privacy guarantees than gossip algorithms for nodes that are close to each other. We supplement our theoretical results with an empirical evaluation on synthetic and real-world graphs and datasets.
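The random-walk scheme described in the abstract can be illustrated with a minimal sketch: a model hops between neighboring nodes of a communication graph, and each visited node applies a clipped, Gaussian-noised gradient step on its local data. This is an illustrative toy, not the paper's actual algorithm; the graph, gradient function, clipping threshold, and noise scale below are all assumptions made for the example.

```python
import math
import random

def clip(v, c):
    """Rescale vector v to have L2 norm at most c."""
    norm = math.sqrt(sum(x * x for x in v))
    return v if norm <= c else [x * c / norm for x in v]

def dp_random_walk(graph, local_grad, model, steps, lr=0.1, clip_c=1.0, sigma=1.0, seed=0):
    """Run a random walk over `graph`; each visited node takes one
    clipped, noisy gradient step (Gaussian mechanism) on the model."""
    rng = random.Random(seed)
    node = rng.choice(list(graph))            # start at a random node
    for _ in range(steps):
        g = clip(local_grad(node, model), clip_c)
        noise = [rng.gauss(0.0, sigma * clip_c) for _ in g]
        model = [m - lr * (gi + ni) for m, gi, ni in zip(model, g, noise)]
        node = rng.choice(graph[node])        # hop to a random neighbor
    return model

# Toy usage: a 4-cycle graph where each node pulls the (scalar) model
# toward its own local target value.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
targets = {0: [1.0], 1: [2.0], 2: [3.0], 3: [4.0]}
grad = lambda node, m: [m[0] - targets[node][0]]
final = dp_random_walk(graph, grad, [0.0], steps=200, sigma=0.1)
```

Under Pairwise Network Differential Privacy, what a given node can infer about another depends on where the walk can travel between them, which is why graph-theoretic quantities (e.g., hitting probabilities along the topology) appear in the paper's closed-form privacy-loss expressions.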
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files:
- 2402.07471 (open access)