Privacy-Preserving Adversarial Representation Learning in ASR: Reality or Illusion?
Document type :
Conference paper with proceedings
Title :
Privacy-Preserving Adversarial Representation Learning in ASR: Reality or Illusion?
Author(s) :
Srivastava, Brij Mohan Lal [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurelien [Author]
Machine Learning in Information Networks [MAGNET]
Tommasi, Marc [Author]
Machine Learning in Information Networks [MAGNET]
Vincent, Emmanuel [Author]
Speech Modeling for Facilitating Oral-Based Communication [MULTISPEECH]
Conference title :
INTERSPEECH 2019 - 20th Annual Conference of the International Speech Communication Association
City :
Graz
Country :
Austria
Start date of the conference :
2019-09-15
English keyword(s) :
Speaker recognition
Adversarial training
End-to-end system
Privacy
Speech recognition
HAL domain(s) :
Informatique [cs]/Apprentissage [cs.LG]
Statistiques [stat]/Machine Learning [stat.ML]
Informatique [cs]/Informatique et langage [cs.CL]
English abstract : [en]
Automatic speech recognition (ASR) is a key technology in many services and applications. This typically requires user devices to send their speech data to the cloud for ASR decoding. As the speech signal carries a lot of information about the speaker, this raises serious privacy concerns. As a solution, an encoder may reside on each user device which performs local computations to anonymize the representation. In this paper, we focus on the protection of speaker identity and study the extent to which users can be recognized based on the encoded representation of their speech as obtained by a deep encoder-decoder architecture trained for ASR. Through speaker identification and verification experiments on the Librispeech corpus with open and closed sets of speakers, we show that the representations obtained from a standard architecture still carry a lot of information about speaker identity. We then propose to use adversarial training to learn representations that perform well in ASR while hiding speaker identity. Our results demonstrate that adversarial training dramatically reduces the closed-set classification accuracy, but this does not translate into increased open-set verification error, hence into increased protection of the speaker identity in practice. We suggest several possible reasons behind this negative result.
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files
- https://hal.inria.fr/hal-02166434/document (Open access)
- srivastava_IS19.pdf (Open access)