Bayesian anti-sparse coding
Document type :
Report and critical review of a work
DOI :
Title :
Bayesian anti-sparse coding
Author(s) :
Elvira, Clément [Auteur]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Chainais, Pierre [Auteur]
Centrale Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Dobigeon, Nicolas [Auteur]
Signal et Communications [IRIT-SC]
Institut National Polytechnique (Toulouse) [Toulouse INP]
Journal title :
IEEE Transactions on Signal Processing
Publisher :
Institute of Electrical and Electronics Engineers
Publication date :
2016-12
ISSN :
1053-587X
English keyword(s) :
democratic distribution
anti-sparse representation
proximal operator
HAL domain(s) :
Engineering Sciences [physics]/Signal and Image Processing [eess.SP]
Mathematics [math]/Statistics [math.ST]
Statistics [stat]/Methodology [stat.ME]
Statistics [stat]/Applications [stat.AP]
English abstract : [en]
Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over representation coefficients exhibits relevant properties in various applications such as robust encoding in digital communications. Anti-sparse regularization can be naturally expressed through an ∞-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distribution, referred to as the democratic prior, is first introduced. Its main properties as well as three random variate generators for this distribution are derived. Then this probability distribution is used as a prior to promote anti-sparsity in a Gaussian linear model, yielding a fully Bayesian formulation of anti-sparse coding. Two Markov chain Monte Carlo (MCMC) algorithms are proposed to generate samples according to the posterior distribution. The first one is a standard Gibbs sampler. The second one uses Metropolis-Hastings moves that exploit the proximity mapping of the log-posterior distribution. These samples are used to approximate maximum a posteriori and minimum mean square error estimators of both parameters and hyperparameters. Simulations on synthetic data illustrate the performances of the two proposed samplers, for both complete and over-complete dictionaries. All results are compared to the recent deterministic variational FITRA algorithm.
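The anti-sparse (∞-norm) penalty mentioned in the abstract can be illustrated with a small numerical sketch. The code below is not the paper's Bayesian samplers nor the FITRA algorithm it compares against; it is a minimal deterministic proximal-gradient solver for the ℓ∞-penalized least-squares problem, using the standard Moreau-decomposition identity that the prox of λ‖·‖∞ equals the residual of the Euclidean projection onto the ℓ1 ball of radius λ. All function names are illustrative.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1 ball of the given radius
    (sorting-based algorithm)."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                 # magnitudes, descending
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    rho = np.nonzero(u * k > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)    # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Prox of lam*||.||_inf via Moreau decomposition:
    v minus its projection onto the l1 ball of radius lam."""
    return v - project_l1_ball(v, lam)

def anti_sparse_coding(y, H, lam, n_iter=500):
    """Proximal gradient descent on 0.5*||y - Hx||^2 + lam*||x||_inf.
    The penalty drives many coefficients toward a shared max magnitude,
    i.e. it spreads information evenly over the representation."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)
        x = prox_linf(x - step * grad, step * lam)
    return x
```

In this sketch the ℓ∞ prox clips large coefficients to a common level, which is the mechanism behind the "democratic" spreading of energy the abstract refers to; the paper itself replaces this deterministic penalty with a democratic prior and MCMC inference.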
Language :
English
Popular science :
No
Collections :
Source :
Files
- https://hal.archives-ouvertes.fr/hal-01433706/document
- Open access
- Access the document
- http://arxiv.org/pdf/1512.06086
- Open access
- Access the document
- Elvira_Chainais_Dobigeon_TSP2016.pdf
- Open access
- Access the document