Zero Knowledge Arguments for Verifiable Sampling
Document type:
Other scientific communication (conference without proceedings - poster - seminar...)
Title:
Zero Knowledge Arguments for Verifiable Sampling
Author(s):
Sabater, César [Author]
Machine Learning in Information Networks [MAGNET]
Ramon, Jan [Author]
Machine Learning in Information Networks [MAGNET]
Conference title:
NeurIPS 2021 Workshop Privacy in Machine Learning
City:
Sydney (Virtual)
Country:
Australia
Conference start date:
2021-12-14
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Cryptography and Security [cs.CR]
English abstract: [en]
In privacy-preserving machine learning, verifying the correct behavior of participants is harder than usual because they are not supposed to reveal their inputs in cleartext to other participants. It is therefore important to make federated machine learning robust against data poisoning and related attacks. While input data can be anchored to a distributed ledger (blockchain), a less studied input is the random sampling that parties perform. In this paper, we describe strategies based on zero-knowledge proofs that allow parties to prove they perform sampling (and other computations) correctly. We sketch a number of alternative ways to implement our idea and provide some preliminary experimental results.
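The abstract describes proving that sampling was performed correctly. The record does not detail the paper's zero-knowledge construction, but the underlying idea of binding a party to its randomness can be illustrated with a simple commit-and-reveal sketch (a hypothetical, non-zero-knowledge stand-in: the names `commit` and `sample` are illustrative, not from the paper):

```python
# Hypothetical commit-and-reveal illustration of verifiable sampling.
# NOT the paper's zero-knowledge protocol: revealing the seed leaks it,
# whereas a ZK argument would prove correctness without the reveal.
import hashlib
import os
import random

def commit(seed: bytes, nonce: bytes) -> bytes:
    """Binding commitment to a random seed: SHA-256(seed || nonce)."""
    return hashlib.sha256(seed + nonce).digest()

def sample(seed: bytes, n: int, k: int) -> list:
    """Deterministically sample k items from range(n) using the seed."""
    rng = random.Random(seed)
    return rng.sample(range(n), k)

# Prover: fix the randomness and publish the commitment first.
seed, nonce = os.urandom(16), os.urandom(16)
c = commit(seed, nonce)
drawn = sample(seed, n=100, k=5)

# Verifier: once (seed, nonce) are revealed, re-check both steps.
assert commit(seed, nonce) == c
assert sample(seed, n=100, k=5) == drawn
```

Because the commitment is published before sampling, the prover cannot retroactively pick a seed that yields a favorable sample; a zero-knowledge argument would provide the same guarantee without ever disclosing the seed.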
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
ANR project:
Collections:
Source:
Files
- https://hal.inria.fr/hal-03464840/document
- Open access
- Access the document