Confidential-DPproof: Confidential Proof of Differentially Private Training
Document type:
Conference paper with proceedings
Title:
Confidential-DPproof: Confidential Proof of Differentially Private Training
Author(s):
Shamsabadi, Ali Shahin [Author]
Brave Software
Tan, Gefei [Author]
Northwestern University [Evanston]
Cebere, Tudor [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurelien [Author]
Institut Desbrest de santé publique [IDESP]
Médecine de précision par intégration de données et inférence causale [PREMEDICAL]
Haddadi, Hamed [Author]
Brave Software
Imperial College London
Papernot, Nicolas [Author]
Department of Computer Science [University of Toronto] [DCS]
Vector Institute
Wang, Xiao [Author]
Northwestern University [Evanston]
Weller, Adrian [Author]
The Alan Turing Institute
University of Cambridge [UK] [CAM]
Conference title:
ICLR 2024 - 12th International Conference on Learning Representations
City:
Vienna
Country:
Austria
Conference start date:
2024-05-07
Publication date:
2024
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Abstract (English):
Post hoc privacy auditing techniques can be used to test the privacy guarantees of a model, but come with several limitations: (i) they can only establish lower bounds on the privacy loss, (ii) the intermediate model updates and some data must be shared with the auditor to get a better approximation of the privacy loss, and (iii) the auditor typically faces a steep computational cost to run a large number of attacks. In this paper, we propose to proactively generate a cryptographic certificate of privacy during training to forego such auditing limitations. We introduce Confidential-DPproof, a framework for Confidential Proof of Differentially Private Training, which enhances training with a certificate of the (ε, δ)-DP guarantee achieved. To obtain this certificate without revealing information about the training data or model, we design a customized zero-knowledge proof protocol tailored to the requirements introduced by differentially private training, including random noise addition and privacy amplification by subsampling. In experiments on CIFAR-10, Confidential-DPproof trains a model achieving state-of-the-art 91% test accuracy with a certified privacy guarantee of (ε = 0.55, δ = 10⁻⁵)-DP in approximately 100 hours.
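The abstract mentions the two ingredients of differentially private training that the zero-knowledge protocol must certify: random noise addition and privacy amplification by subsampling. As context, a minimal illustrative sketch of a DP-SGD-style update with Poisson subsampling, per-example clipping, and Gaussian noise is given below; this is not the paper's certified protocol, and the function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`, `sample_rate`) are ours, chosen for illustration only.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, sample_rate=0.01, rng=None):
    """One DP-SGD-style update (illustrative sketch, not the paper's protocol):
    Poisson subsampling, per-example gradient clipping, Gaussian noise addition."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(per_example_grads)
    # Poisson subsampling: each example is included independently with
    # probability `sample_rate` (this is what enables privacy amplification).
    mask = rng.random(n) < sample_rate
    acc = np.zeros_like(params)
    for g, included in zip(per_example_grads, mask):
        if not included:
            continue
        # Clip each per-example gradient to L2 norm at most `clip_norm`,
        # bounding any single example's influence on the update.
        norm = np.linalg.norm(g)
        acc += g * min(1.0, clip_norm / max(norm, 1e-12))
    # Add Gaussian noise calibrated to the clipping bound (std = sigma * C).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    expected_batch = max(1, int(sample_rate * n))
    return params - lr * (acc + noise) / expected_batch
```

The zero-knowledge protocol in the paper certifies that steps of this general shape were actually executed (including that the noise was sampled honestly), without revealing the data, gradients, or model.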
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files:
- 8068_confidential_dpproof_confident.pdf (open access)