Confidential-DPproof: Confidential Proof ...
Document type :
Conference paper with proceedings
Title :
Confidential-DPproof: Confidential Proof of Differentially Private Training
Author(s) :
Shamsabadi, Ali Shahin [Auteur]
Brave Software
Tan, Gefei [Auteur]
Northwestern University [Evanston]
Cebere, Tudor [Auteur]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurelien [Auteur]
Institut Desbrest de santé publique [IDESP]
Médecine de précision par intégration de données et inférence causale [PREMEDICAL]
Haddadi, Hamed [Auteur]
Brave Software
Imperial College London
Papernot, Nicolas [Auteur]
Department of Computer Science [University of Toronto] [DCS]
Vector Institute
Wang, Xiao [Auteur]
Northwestern University [Evanston]
Weller, Adrian [Auteur]
The Alan Turing Institute
University of Cambridge [UK] [CAM]
Conference title :
ICLR 2024 - 12th International Conference on Learning Representations
City :
Vienna (Austria)
Country :
Austria
Start date of the conference :
2024-05-07
Publication date :
2024
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
English abstract : [en]
Post hoc privacy auditing techniques can be used to test the privacy guarantees of a model, but come with several limitations: (i) they can only establish lower bounds on the privacy loss, (ii) the intermediate model updates and some data must be shared with the auditor to get a better approximation of the privacy loss, and (iii) the auditor typically faces a steep computational cost to run a large number of attacks. In this paper, we propose to proactively generate a cryptographic certificate of privacy during training to forego such auditing limitations. We introduce Confidential-DPproof, a framework for Confidential Proof of Differentially Private Training, which enhances training with a certificate of the (ε, δ)-DP guarantee achieved. To obtain this certificate without revealing information about the training data or model, we design a customized zero-knowledge proof protocol tailored to the requirements introduced by differentially private training, including random noise addition and privacy amplification by subsampling. In experiments on CIFAR-10, Confidential-DPproof trains a model achieving state-of-the-art 91% test accuracy with a certified privacy guarantee of (ε = 0.55, δ = 10⁻⁵)-DP in approximately 100 hours.
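The abstract refers to the two standard ingredients of differentially private training that the zero-knowledge protocol must certify: per-example gradient clipping with Gaussian noise addition, and privacy amplification by subsampling. The sketch below is not the paper's protocol; it is a minimal, hedged NumPy illustration of those two ingredients as they appear in generic DP-SGD, with all function names (`dp_sgd_step`, `poisson_subsample`) being hypothetical helpers chosen for this example.

```python
import numpy as np

def dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One noisy aggregation step in the style of DP-SGD:
    clip each per-example gradient to L2 norm `clip_norm`,
    sum the clipped gradients, add Gaussian noise, and average."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if too long
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(grads)

def poisson_subsample(n, q, rng):
    """Poisson subsampling: each of the n examples is included
    independently with probability q (the amplification mechanism)."""
    mask = rng.random(n) < q
    return np.nonzero(mask)[0]
```

With the noise multiplier set to zero, the step reduces to averaging clipped gradients, which makes the clipping behavior easy to check in isolation; the paper's contribution is proving, in zero knowledge, that steps of this shape were actually executed with properly sampled noise.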
Language :
English
Peer reviewed article :
Yes
Audience :
Internationale
Popular science :
No
ANR Project :
Collections :
Source :
Files
- 8068_confidential_dpproof_confident.pdf (Open access)