From Noisy Fixed-Point Iterations to Private ADMM for Centralized and Federated Learning
Document type:
Conference paper with published proceedings
Title:
From Noisy Fixed-Point Iterations to Private ADMM for Centralized and Federated Learning
Author(s):
Cyffers, Edwige [Auteur]
Université Paris 1 Panthéon-Sorbonne - UFR Philosophie [UP1 UFR10]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurélien [Auteur]
Machine Learning in Information Networks [MAGNET]
Basu, Debabrota [Auteur]
Scool [Scool]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Inria Lille - Nord Europe
Conference:
Proceedings of the 40th International Conference on Machine Learning (ICML)
City:
Honolulu
Country:
United States of America
Conference start date:
2023-07
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Cryptography and Security [cs.CR]
Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC]
Mathematics [math]/Optimization and Control [math.OC]
Statistics [stat]/Machine Learning [stat.ML]
Abstract (English): [en]
We study differentially private (DP) machine learning algorithms as instances of noisy fixed-point iterations, in order to derive privacy and utility results from this well-studied framework. We show that this new perspective recovers popular private gradient-based methods like DP-SGD and provides a principled way to design and analyze new private optimization algorithms in a flexible manner. Focusing on the widely-used Alternating Direction Method of Multipliers (ADMM), we use our general framework to derive novel private ADMM algorithms for centralized, federated and fully decentralized learning. For these three algorithms, we establish strong privacy guarantees leveraging privacy amplification by iteration and by subsampling. Finally, we provide utility guarantees using a unified analysis that exploits a recent linear convergence result for noisy fixed-point iterations.
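The abstract's central idea can be illustrated with a minimal sketch: a Krasnosel'skii–Mann fixed-point iteration with additive Gaussian noise, which recovers a DP-SGD-style update when the operator is a (clipped) gradient step. This is an illustrative toy, not the paper's algorithms; the function names, step sizes, and the quadratic objective are all hypothetical choices for demonstration.

```python
import numpy as np

def noisy_fixed_point(T, x0, lam=1.0, sigma=0.05, n_iter=200, rng=None):
    """Noisy Krasnosel'skii-Mann iteration:
    x_{t+1} = (1 - lam) * x_t + lam * (T(x_t) + e_t),  e_t ~ N(0, sigma^2 I).
    (Sketch only; the paper's general framework is richer than this.)"""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        noise = sigma * rng.standard_normal(x.shape)
        x = (1 - lam) * x + lam * (T(x) + noise)
    return x

# DP-SGD-like instance (hypothetical example): T is a clipped gradient step
# on the toy objective f(x) = ||x - c||^2 / 2, whose minimizer is c.
c = np.array([1.0, -2.0])
gamma, clip = 0.1, 1.0

def T(x):
    g = x - c                                      # gradient of f at x
    g = g / max(1.0, np.linalg.norm(g) / clip)     # norm clipping, as in DP-SGD
    return x - gamma * g                           # gradient step

x_star = noisy_fixed_point(T, np.zeros(2))
# x_star hovers near the minimizer c, up to noise of order sigma.
```

With `lam = 1.0` the iteration is exactly noisy clipped gradient descent; intermediate values of `lam` give the averaged (Krasnosel'skii–Mann) updates that the paper's unified analysis covers.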
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
ANR project:
Collections:
Source:
Files
- 2302.12559
- Open access
- Access the document