High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent
Document type :
Conference paper with proceedings
Title :
High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent
Author(s) :
Mangold, Paul [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurelien [Author]
Machine Learning in Information Networks [MAGNET]
Salmon, Joseph [Author]
Scientific Data Management [ZENITH]
Institut universitaire de France [IUF]
Institut Montpelliérain Alexander Grothendieck [IMAG]
Tommasi, Marc [Author]
Machine Learning in Information Networks [MAGNET]
Conference title :
AISTATS 2023 - International Conference on Artificial Intelligence and Statistics
City :
Valencia
Country :
Spain
Start date of the conference :
2023-04-25
Book title :
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Cryptography and Security [cs.CR]
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
In this paper, we study differentially private empirical risk minimization (DP-ERM). It has been shown that the worst-case utility of DP-ERM degrades polynomially as the dimension increases, which is a major obstacle to privately learning large machine learning models. In high dimension, it is common for some of a model's parameters to carry more information than others. To exploit this, we propose a differentially private greedy coordinate descent (DP-GCD) algorithm. At each iteration, DP-GCD privately performs a coordinate-wise gradient step along the gradient's (approximately) greatest entry. We show theoretically that DP-GCD can achieve a logarithmic dependence on the dimension for a wide range of problems by naturally exploiting their structural properties (such as quasi-sparse solutions). We illustrate this behavior numerically, on both synthetic and real datasets.
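
Illustration (not from the record): the abstract describes DP-GCD's update rule in words. The Python sketch below is one plausible reading of that rule, not the paper's exact algorithm: it selects the coordinate with the (approximately) largest gradient entry via a noisy argmax, then takes a noisy gradient step along that single coordinate. All names (dp_gcd, grad_fn) and the noise scales lam_select and lam_update are illustrative assumptions; in the paper the noise would be calibrated to per-coordinate gradient sensitivities and the privacy budget.

```python
import numpy as np

def dp_gcd(grad_fn, d, n_iters, gamma, lam_select, lam_update, seed=0):
    """Hypothetical sketch of DP greedy coordinate descent.

    grad_fn    : function returning the gradient of the empirical risk at w
    d          : number of model parameters
    gamma      : per-coordinate step size (assumed constant here)
    lam_select : Laplace noise scale for the greedy coordinate selection
    lam_update : Laplace noise scale for the coordinate-wise update
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    for _ in range(n_iters):
        g = grad_fn(w)
        # Report-noisy-max: perturb each |g_j| and pick the (approximately)
        # greatest entry, so the selection itself is private.
        j = int(np.argmax(np.abs(g) + rng.laplace(scale=lam_select, size=d)))
        # Noisy gradient step along coordinate j only.
        w[j] -= gamma * (g[j] + rng.laplace(scale=lam_update))
    return w

# Usage on a toy least-squares objective (X, y assumed given):
# grad = lambda w: X.T @ (X @ w - y) / len(y)
# w_priv = dp_gcd(grad, d=X.shape[1], n_iters=50, gamma=0.1,
#                 lam_select=0.01, lam_update=0.01)
```

Updating one coordinate per iteration means noise is injected into a single entry rather than the full d-dimensional gradient, which is consistent with the logarithmic (rather than polynomial) dimension dependence claimed in the abstract.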
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files :
- paper.pdf [Open access]