Sliced-Wasserstein normalizing flows: beyond maximum likelihood training
Document type :
Conference paper with published proceedings
Title :
Sliced-Wasserstein normalizing flows: beyond maximum likelihood training
Author(s) :
Coeurdoux, Florentin [Author]
Signal et Communications [IRIT-SC]
Dobigeon, Nicolas [Author]
Institut universitaire de France [IUF]
Signal et Communications [IRIT-SC]
Chainais, Pierre [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Signal et Communications [IRIT-SC]
Conference title :
30th European Symposium on Artificial Neural Networks (ESANN 2022)
City :
Bruges
Country :
Belgium
Start date of the conference :
2022-10-05
Book title :
To appear
HAL domain(s) :
Computer Science [cs]/Artificial Intelligence [cs.AI]
Computer Science [cs]/Neural Networks [cs.NE]
English abstract : [en]
Despite their advantages, normalizing flows generally suffer from several shortcomings, including their tendency to generate unrealistic data (e.g., images) and their failure to detect out-of-distribution data. One reason for these deficiencies lies in the training strategy, which traditionally exploits a maximum likelihood principle only. This paper proposes a new training paradigm based on a hybrid objective function combining the maximum likelihood estimation (MLE) principle and a sliced-Wasserstein distance. Results obtained on synthetic toy examples and real image data sets show better generative abilities in terms of both likelihood and visual aspects of the generated samples. Reciprocally, the proposed approach leads to a lower likelihood of out-of-distribution data, demonstrating a greater data fidelity of the resulting flows.
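The hybrid objective described in the abstract combines a negative log-likelihood term with a sliced-Wasserstein distance between generated and target samples. As a rough illustration only (a generic NumPy Monte Carlo estimator under the standard random-projection definition, not the authors' implementation; the function name and parameters are assumptions), the sliced-Wasserstein-2 distance between two equal-size empirical samples can be sketched as:

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=200, rng=None):
    """Monte Carlo estimate of the sliced-Wasserstein-2 distance between
    two empirical samples x and y of shape (n, d) with equal n.
    Note: this is a generic sketch, not the paper's implementation."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    # Draw random directions uniformly on the unit sphere in R^d
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto each direction: shape (n, n_projections)
    x_proj = x @ theta.T
    y_proj = y @ theta.T
    # In 1D, the Wasserstein-2 distance between empirical measures of equal
    # size reduces to matching sorted samples; average over directions
    w2 = np.mean((np.sort(x_proj, axis=0) - np.sort(y_proj, axis=0)) ** 2)
    return float(np.sqrt(w2))
```

In a hybrid training loss of the kind the abstract describes, such a term would be weighted against the likelihood objective, e.g. `loss = nll + lam * sw`; the weighting and estimator details here are assumptions, not taken from the paper.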
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
ANR Project :
Comment :
European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
Collections :
Source :
Files
- https://hal.archives-ouvertes.fr/hal-03720995/document (Open access)
- http://arxiv.org/pdf/2207.05468 (Open access)
- Sliced_Wasserstein_normalizing_flows__beyond_maximum_likelihood_training__ESANN_.pdf (Open access)