GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation
Document type :
Conference paper with proceedings
Title :
GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation
Author(s) :
Sajadmanesh, Sina [Author]
Ecole Polytechnique Fédérale de Lausanne [EPFL]
IDIAP Research Institute
Shamsabadi, Ali Shahin [Author]
The Alan Turing Institute
Bellet, Aurelien [Author]
Machine Learning in Information Networks [MAGNET]
Gatica-Perez, Daniel [Author]
Ecole Polytechnique Fédérale de Lausanne [EPFL]
IDIAP Research Institute
Conference title :
USENIX Security 2023 - 32nd USENIX Security Symposium
City :
Anaheim
Country :
United States of America
Start date of the conference :
2023-08-09
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
In this paper, we study the problem of learning Graph Neural Networks (GNNs) with Differential Privacy (DP). We propose a novel differentially private GNN based on Aggregation Perturbation (GAP), which adds stochastic noise to the GNN's aggregation function to statistically obfuscate the presence of a single edge (edge-level privacy) or a single node and all its adjacent edges (node-level privacy). Tailored to the specifics of private learning, GAP's new architecture is composed of three separate modules: (i) the encoder module, where we learn private node embeddings without relying on the edge information; (ii) the aggregation module, where we compute noisy aggregated node embeddings based on the graph structure; and (iii) the classification module, where we train a neural network on the private aggregations for node classification without further querying the graph edges. GAP's major advantage over previous approaches is that it can benefit from multi-hop neighborhood aggregations, and guarantees both edge-level and node-level DP not only for training, but also at inference with no additional costs beyond the training's privacy budget. We analyze GAP's formal privacy guarantees using Rényi DP and conduct empirical experiments over three real-world graph datasets. We demonstrate that GAP offers significantly better accuracy-privacy trade-offs than state-of-the-art DP-GNN approaches and naive MLP-based baselines. Our code is publicly available at https://github.com/sisaman/GAP.
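The aggregation-perturbation idea described in the abstract can be illustrated with a short, self-contained sketch. This is not the authors' implementation (that is available at https://github.com/sisaman/GAP); the function name `perturbed_aggregate`, the dense adjacency matrix, and the fixed noise scale `sigma` are illustrative assumptions. The sketch only shows the core mechanism: row-normalize node embeddings so each node's contribution to a neighborhood sum has bounded norm, sum-aggregate over neighbors, and add Gaussian noise to the aggregate.

```python
# Minimal sketch of aggregation perturbation (hypothetical names; not the
# paper's code). Row-normalized embeddings bound each node's contribution,
# which is what makes the Gaussian mechanism applicable to the sum.
import torch

def perturbed_aggregate(x: torch.Tensor, adj: torch.Tensor, sigma: float) -> torch.Tensor:
    """Noisy neighborhood aggregation.

    x     : (num_nodes, dim) node embeddings
    adj   : (num_nodes, num_nodes) binary adjacency matrix
    sigma : standard deviation of the Gaussian noise
    """
    # Row-normalize so every embedding has L2 norm <= 1 (bounded sensitivity).
    x = torch.nn.functional.normalize(x, p=2, dim=1)
    # Sum-aggregate neighbor embeddings.
    agg = adj @ x
    # Add Gaussian noise to obfuscate any single contribution to the sum.
    return agg + sigma * torch.randn_like(agg)

# Toy usage: 4 nodes, 8-dimensional embeddings.
x = torch.randn(4, 8)
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float32)
noisy = perturbed_aggregate(x, adj, sigma=1.0)
print(noisy.shape)  # torch.Size([4, 8])
```

Because the noisy aggregates are computed once from the graph and then reused by the classification module, no further edge queries are needed during training or inference, which is the source of GAP's fixed privacy cost.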
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files
- 2203.00949.pdf (Open access)