Document type:
Preprint or working paper
Title:
High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent
Author(s):
Mangold, Paul [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurelien [Author]
Machine Learning in Information Networks [MAGNET]
Salmon, Joseph [Author]
Scientific Data Management [ZENITH]
Institut universitaire de France [IUF]
Institut Montpelliérain Alexander Grothendieck [IMAG]
Tommasi, Marc [Author]
Machine Learning in Information Networks [MAGNET]
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Cryptography and Security [cs.CR]
Statistics [stat]/Machine Learning [stat.ML]
Abstract (in English):
In this paper, we study differentially private empirical risk minimization (DP-ERM). It has been shown that the worst-case utility of DP-ERM degrades polynomially as the dimension increases. This is a major obstacle to privately learning large machine learning models. In high dimension, it is common for some of the model's parameters to carry more information than others. To exploit this, we propose a differentially private greedy coordinate descent (DP-GCD) algorithm. At each iteration, DP-GCD privately performs a coordinate-wise gradient step along the gradient's (approximately) greatest entry. We show theoretically that DP-GCD can achieve a logarithmic dependence on the dimension for a wide range of problems by naturally exploiting their structural properties (such as quasi-sparse solutions). We illustrate this behavior numerically, both on synthetic and real datasets.
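The abstract describes the DP-GCD update only at a high level. The sketch below illustrates one plausible reading of it: the coordinate with the (approximately) largest gradient entry is selected privately via report-noisy-max with Laplace noise, then a Laplace-perturbed gradient step is taken along that single coordinate. All names, noise scales, and step sizes here are illustrative assumptions, not the paper's exact algorithm or privacy calibration.

```python
import numpy as np

def dp_gcd(grad_fn, d, n_iters, step_size, noise_select, noise_update, rng=None):
    """Minimal sketch of differentially private greedy coordinate descent.

    Noise scales are placeholders: a real implementation must calibrate
    them to the gradient's coordinate-wise sensitivity and the privacy
    budget (epsilon, delta).
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.zeros(d)
    for _ in range(n_iters):
        g = grad_fn(w)  # full gradient of the empirical risk at w
        # Report-noisy-max: perturb each |g_j| with Laplace noise, take argmax.
        j = int(np.argmax(np.abs(g) + rng.laplace(scale=noise_select, size=d)))
        # Noisy gradient step on the selected coordinate only.
        w[j] -= step_size * (g[j] + rng.laplace(scale=noise_update))
    return w

# Toy usage: least squares with a quasi-sparse solution (5 informative
# coordinates out of 100), the regime where greedy selection helps.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))
w_true = np.zeros(100)
w_true[:5] = 1.0
y = X @ w_true
grad_fn = lambda w: X.T @ (X @ w - y) / len(y)
w_hat = dp_gcd(grad_fn, d=100, n_iters=50, step_size=0.5,
               noise_select=0.01, noise_update=0.01, rng=rng)
```

Because only one coordinate is updated per iteration, the noise added per step does not scale with the full dimension, which is the intuition behind the logarithmic dependence on the dimension claimed in the abstract.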
Language:
English
ANR project:
Collections:
Source:
Files
- https://hal.inria.fr/hal-03714465v2/document
- Open access
- Access the document