Bayesian anti-sparse coding
Document type:
Journal article
DOI:
Title:
Bayesian anti-sparse coding
Author(s):
Elvira, Clément [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Centrale Lille
Chainais, Pierre [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Centrale Lille
Dobigeon, Nicolas [Author]
Institut National Polytechnique (Toulouse) [Toulouse INP]
Signal et Communications [IRIT-SC]
Journal title:
IEEE Transactions on Signal Processing
Publisher:
Institute of Electrical and Electronics Engineers
Publication date:
2016-12
ISSN:
1053-587X
English keyword(s):
democratic distribution
anti-sparse representation
proximal operator
HAL discipline(s):
Engineering Sciences [physics]/Signal and Image Processing [eess.SP]
Mathematics [math]/Statistics [math.ST]
Statistics [stat]/Methodology [stat.ME]
Statistics [stat]/Applications [stat.AP]
English abstract:
Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over representation coefficients exhibits relevant properties in various applications such as robust encoding in digital communications. Anti-sparse regularization can be naturally expressed through an ℓ∞-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distribution, referred to as the democratic prior, is first introduced. Its main properties, as well as three random variate generators for this distribution, are derived. Then this probability distribution is used as a prior to promote anti-sparsity in a Gaussian linear model, yielding a fully Bayesian formulation of anti-sparse coding. Two Markov chain Monte Carlo (MCMC) algorithms are proposed to generate samples according to the posterior distribution. The first one is a standard Gibbs sampler. The second one uses Metropolis-Hastings moves that exploit the proximity mapping of the log-posterior distribution. These samples are used to approximate maximum a posteriori and minimum mean square error estimators of both parameters and hyperparameters. Simulations on synthetic data illustrate the performance of the two proposed samplers, for both complete and over-complete dictionaries. All results are compared to the recent deterministic variational FITRA algorithm.
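
As a reading aid, the MAP connection stated in the abstract can be made explicit. The following is a sketch reconstructed from the abstract alone; the symbols y, H, x, σ², and λ (observation, dictionary, coefficient vector, noise variance, prior parameter) are generic names, not notation taken from the paper.

    % Democratic prior, whose form is implied by the l_inf penalty:
    %   p(x) \propto \exp(-\lambda \lVert x \rVert_\infty)
    % Gaussian linear model: y = Hx + n, with n ~ N(0, \sigma^2 I).
    % The MAP estimator under this prior then coincides with
    % l_inf-penalized least squares, i.e., deterministic anti-sparse coding:
    \hat{x}_{\mathrm{MAP}}
      = \arg\max_{x} \, p(x \mid y)
      = \arg\min_{x} \; \frac{1}{2\sigma^{2}} \lVert y - Hx \rVert_{2}^{2}
        + \lambda \lVert x \rVert_{\infty}

In this light, the MCMC samplers explore the full posterior rather than just its mode, which is what makes MMSE estimates and hyperparameter inference possible in addition to MAP estimation.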
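
The second sampler's Metropolis-Hastings moves exploit the proximity mapping of the log-posterior; a key ingredient of any such scheme is the proximal operator of the ℓ∞ term itself. The sketch below is a minimal illustration under that assumption, not the paper's algorithm: it evaluates the prox of λ‖·‖∞ through the standard Moreau decomposition and an ℓ1-ball projection (function names are hypothetical).

    import numpy as np

    def project_l1_ball(v, radius):
        # Euclidean projection of v onto {u : ||u||_1 <= radius},
        # via the sort-based algorithm of Duchi et al. (2008).
        if np.abs(v).sum() <= radius:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
        css = np.cumsum(u)
        ks = np.arange(1, v.size + 1)
        rho = np.nonzero(u > (css - radius) / ks)[0][-1]
        theta = (css[rho] - radius) / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def prox_linf(v, lam):
        # Moreau decomposition: prox_{lam*||.||_inf}(v) = v - P(v),
        # where P projects onto the l1 ball of radius lam.
        return v - project_l1_ball(v, lam)

    # The prox clips the largest entries toward a common magnitude,
    # which is exactly the "democratic" effect of the l_inf penalty:
    x = np.array([3.0, -0.5, 2.0])
    print(prox_linf(x, lam=1.5))  # the two largest entries are clipped
                                  # to 1.75; the small one is untouched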
Language:
English
Popular science:
No
Collections:
Source:
Files
- https://hal.archives-ouvertes.fr/hal-01433706/document (open access)
- http://arxiv.org/pdf/1512.06086 (open access)
- Elvira_Chainais_Dobigeon_TSP2016.pdf (open access)