Plug-and-Play Split Gibbs Sampler: Embedding Deep Generative Priors in Bayesian Inference
Document type :
Article in a scientific journal: Original article
DOI :
Title :
Plug-and-Play Split Gibbs Sampler: Embedding Deep Generative Priors in Bayesian Inference
Author(s) :
Coeurdoux, Florentin [Author]
Signal et Communications [IRIT-SC]
Dobigeon, Nicolas [Author]
Signal et Communications [IRIT-SC]
Chainais, Pierre [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Journal title :
IEEE Transactions on Image Processing
Pages :
3496-3507
Publisher :
Institute of Electrical and Electronics Engineers
Publication date :
2024-05
ISSN :
1057-7149
English keyword(s) :
Bayesian inference
plug-and-play prior
deep generative model
diffusion-based model
Markov chain Monte Carlo
inverse problem
HAL domain(s) :
Statistics [stat]/Machine Learning [stat.ML]
Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Image Processing [eess.IV]
English abstract : [en]
This paper introduces a stochastic plug-and-play (PnP) sampling algorithm that leverages variable splitting to efficiently sample from a posterior distribution. The algorithm, based on split Gibbs sampling (SGS), draws inspiration from the alternating direction method of multipliers (ADMM). It divides the challenging task of posterior sampling into two simpler sampling problems. The first problem depends on the likelihood function, while the second is interpreted as a Bayesian denoising problem that can be readily carried out by a deep generative model. Specifically, for illustrative purposes, the proposed method is implemented in this paper using state-of-the-art diffusion-based generative models. Akin to its deterministic PnP-based counterparts, the proposed method has the great advantage of not requiring an explicit choice of the prior distribution, which is instead encoded into a pre-trained generative model. However, unlike optimization methods (e.g., PnP-ADMM), which generally provide only point estimates, the proposed approach allows conventional Bayesian estimators to be accompanied by confidence intervals at a reasonable additional computational cost. Experiments on commonly studied image processing problems illustrate the efficiency of the proposed sampling strategy. Its performance is compared to recent state-of-the-art optimization and sampling methods.
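The abstract above describes a two-block sampler: one step handles the likelihood, the other is a denoising step delegated to a generative model. Below is a minimal, self-contained Python sketch of such a split Gibbs sampler on a toy linear Gaussian inverse problem. The splitting parameter rho and the denoise function are illustrative assumptions (a simple Gaussian-prior shrinkage), not the authors' implementation; the paper plugs a pre-trained diffusion-based generative model into that denoising step.

# Sketch of a split Gibbs sampler (SGS) for y = A x + n, n ~ N(0, sigma^2 I).
# Assumption: a standard Gaussian prior on the auxiliary variable z, so the
# denoising step has a closed form; the paper uses a diffusion model instead.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: random forward operator and noisy observation.
n, m = 32, 24
A = rng.standard_normal((m, n)) / np.sqrt(n)
x_true = rng.standard_normal(n)
sigma = 0.1
y = A @ x_true + sigma * rng.standard_normal(m)

rho = 0.3        # splitting parameter coupling x and the auxiliary variable z
n_iter = 500

def denoise(u, rho):
    # Posterior mean of z given x under the toy Gaussian prior and the
    # coupling z ~ N(x, rho^2 I); placeholder for a learned denoiser.
    return u / (1.0 + rho ** 2)

# Precision and covariance of the conditional x | y, z (constant here).
prec = A.T @ A / sigma ** 2 + np.eye(n) / rho ** 2
cov = np.linalg.inv(prec)

x = np.zeros(n)
z = np.zeros(n)
samples = []
for it in range(n_iter):
    # 1) Likelihood-dependent step: sample x | y, z from a Gaussian with
    #    mean cov @ (A^T y / sigma^2 + z / rho^2) and covariance cov.
    mean = cov @ (A.T @ y / sigma ** 2 + z / rho ** 2)
    x = rng.multivariate_normal(mean, cov)

    # 2) Denoising step: with the toy Gaussian prior, z | x is exactly
    #    N(x / (1 + rho^2), rho^2 / (1 + rho^2) I). The paper carries out
    #    this step with a diffusion-based generative model.
    z = denoise(x, rho) + (rho / np.sqrt(1.0 + rho ** 2)) * rng.standard_normal(n)

    samples.append(x)

samples = np.array(samples[n_iter // 2:])  # discard burn-in
x_mmse = samples.mean(axis=0)              # posterior mean estimate
x_std = samples.std(axis=0)                # per-coordinate credible width
print("Relative MMSE error:", np.linalg.norm(x_mmse - x_true) / np.linalg.norm(x_true))

As in the paper, keeping the whole chain of samples (rather than only a point estimate) is what allows credible intervals such as x_std to be reported alongside the estimator.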
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
ANR Project :
Collections :
Source :
Files
- Coeurdoux_IEEE_Trans_IP_2024.pdf (Open access)
- 2304.11134 (Open access)