Document type:
Book review (compte-rendu et recension critique d'ouvrage)
Title:
Vers des dictionnaires de taille optimale: une approche bayésienne non paramétrique
Author(s):
Dang, Hong Phuong [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Chainais, Pierre [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Journal title:
Journal of Signal Processing Systems
Pages:
221–232
Publisher:
Springer
Publication date:
2018
ISSN:
1939-8018
HAL domain(s):
Computer Science [cs]/Image Processing [eess.IV]
Statistics [stat]/Machine Learning [stat.ML]
Mathematics [math]/Statistics [math.ST]
English abstract: [en]
Solving inverse problems usually calls for adapted priors, such as a well-chosen representation of possible solutions. One family of approaches relies on learning redundant dictionaries for sparse representation. In image processing, dictionary learning is applied to sets of patches. Many methods work with a dictionary whose number of atoms is fixed in advance. Moreover, optimization methods often require prior knowledge of the noise level to tune regularization parameters. We propose a Bayesian nonparametric approach that learns a dictionary of adapted size. An Indian Buffet Process prior makes it possible to learn an adequate number of atoms. The noise level is also accurately estimated, so that nearly no parameter tuning is needed. We illustrate the relevance of the resulting dictionaries in numerical experiments.
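The abstract's key mechanism is an Indian Buffet Process (IBP) prior, which lets the number of dictionary atoms grow with the data instead of being fixed in advance. A minimal sketch of sampling a binary item-atom assignment matrix from the IBP is given below; the function name `sample_ibp` and the concentration value `alpha=2.0` are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def sample_ibp(n_items, alpha, rng=None):
    """Draw a binary matrix Z from the Indian Buffet Process prior.

    Rows are data items ("customers"), columns are atoms ("dishes").
    The number of columns is not fixed in advance: each item may create
    new atoms, which is what allows an adapted dictionary size.
    """
    rng = np.random.default_rng(rng)
    counts = []  # how many items use each existing atom
    rows = []
    for i in range(1, n_items + 1):
        # Reuse an existing atom k with probability counts[k] / i.
        row = [1 if rng.random() < m / i else 0 for m in counts]
        for k, z in enumerate(row):
            counts[k] += z
        # Create Poisson(alpha / i) brand-new atoms for this item.
        n_new = rng.poisson(alpha / i)
        row.extend([1] * n_new)
        counts.extend([1] * n_new)
        rows.append(row)
    K = len(counts)  # total number of atoms actually used
    Z = np.zeros((n_items, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(10, alpha=2.0, rng=0)
print(Z.shape)  # the column count (dictionary size) is itself random
```

In the full model, each column of `Z` would be paired with a learned atom and sparse coefficients; here only the prior over which atoms are active is sketched.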
Language:
English
Popular science:
No
Files:
- https://hal.archives-ouvertes.fr/hal-01433621v2/document (Open access)
- Dang_Chainais_JSPS_final.pdf (Open access)