Computational Information Geometry for Binary Classification of High-Dimensional Random Tensors
Document type :
Journal article
DOI :
10.3390/e20030203
Title :
Computational Information Geometry for Binary Classification of High-Dimensional Random Tensors
Author(s) :
Pham, Gia-Thuy [Author]
Laboratoire des signaux et systèmes [L2S]
Boyer, Remy [Author]
Laboratoire des signaux et systèmes [L2S]
Nielsen, Frank [Author]
Laboratoire d'informatique de l'École polytechnique [Palaiseau] [LIX]
Journal title :
Entropy
Pages :
203
Publisher :
MDPI
Publication date :
2018
ISSN :
1099-4300
English keyword(s) :
optimal Bayesian detection
information geometry
minimal error probability
Chernoff/Bhattacharyya upper bound
large random tensor
Fisher information
large random sensing matrix
HAL domain(s) :
Statistics [stat]/Other [stat.ML]
English abstract : [en]
Evaluating the performance of Bayesian classification in a high-dimensional random tensor is a fundamental problem, usually difficult and under-studied. In this work, we consider two Signal-to-Noise Ratio (SNR)-based binary classification problems of interest. Under the alternative hypothesis, i.e., for a non-zero SNR, the observed signals are either a noisy rank-R tensor admitting a Q-order Canonical Polyadic Decomposition (CPD) with large factors of size N_q × R for 1 ≤ q ≤ Q, where R, N_q → ∞ with R^(1/q)/N_q converging to a finite constant, or a noisy tensor admitting a Tucker Decomposition (TKD) of multilinear (M_1, ..., M_Q)-rank with large factors of size N_q × M_q for 1 ≤ q ≤ Q, where N_q, M_q → ∞ with M_q/N_q converging to a finite constant. The classification of the random entries (coefficients) of the core tensor in the CPD/TKD is hard to study since the exact derivation of the minimal Bayes' error probability is mathematically intractable. To circumvent this difficulty, the Chernoff Upper Bound (CUB) for larger SNRs and the Fisher information at low SNR are derived and studied, based on information geometry theory. The tightest CUB is reached at the value minimizing the error exponent, denoted by s. In general, due to the asymmetry of the s-divergence, the Bhattacharyya Upper Bound (BUB), that is, the Chernoff information calculated at s = 1/2, cannot solve this problem effectively. As a consequence, we rely on a costly numerical optimization strategy to find s. However, thanks to powerful random matrix theory tools, a simple analytical expression of s is provided with respect to the SNR in the two schemes considered. This work shows that the BUB is the tightest bound at low SNRs; however, this property no longer holds at higher SNRs.
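For context, the Chernoff and Bhattacharyya bounds mentioned above refer to the standard upper bound on the minimal Bayes' error probability: for hypotheses H_0 : x ~ p_0 and H_1 : x ~ p_1 with priors π_0 and π_1, and for every s in (0, 1), P_e ≤ π_0^(1-s) π_1^s exp(-μ(s)) with μ(s) = -log ∫ p_0(x)^(1-s) p_1(x)^s dx; the tightest CUB takes the s optimizing μ(s), and the BUB is the special case s = 1/2. The short sketch below illustrates the numerical search for this optimal s that the paper replaces with a closed-form expression; it is purely illustrative and not the paper's model or code (the zero-mean Gaussian hypotheses, the toy SNR-dependent covariance and the function name are assumptions).

import numpy as np

def chernoff_exponent(s, cov0, cov1):
    # mu(s) = -log of the integral of p0(x)^(1-s) * p1(x)^s dx for zero-mean
    # Gaussians N(0, cov0) under H0 and N(0, cov1) under H1 (natural-parameter form).
    prec_mix = (1.0 - s) * np.linalg.inv(cov0) + s * np.linalg.inv(cov1)
    return 0.5 * ((1.0 - s) * np.linalg.slogdet(cov0)[1]
                  + s * np.linalg.slogdet(cov1)[1]
                  + np.linalg.slogdet(prec_mix)[1])

# Toy setting (assumption, not the paper's tensor model): H0 is noise only,
# H1 adds an SNR-dependent term to the covariance.
dim, snr = 8, 2.0
cov0 = np.eye(dim)
cov1 = (1.0 + snr) * np.eye(dim)

# Brute-force search over s in (0, 1); the paper's contribution is precisely
# to avoid this costly step via a random-matrix-theory closed form.
grid = np.linspace(1e-3, 1.0 - 1e-3, 999)
mu = np.array([chernoff_exponent(s, cov0, cov1) for s in grid])
s_star = grid[mu.argmax()]                                # tightest Chernoff bound
cub = 0.5 * np.exp(-mu.max())                             # CUB for equal priors
bub = 0.5 * np.exp(-chernoff_exponent(0.5, cov0, cov1))   # BUB (s = 1/2)
print(f"s* = {s_star:.3f}, CUB = {cub:.3e}, BUB = {bub:.3e}")

In this toy example, s* stays near 1/2 at low snr and drifts away from it as snr grows, which is consistent with the abstract's observation that the BUB remains the tightest bound only at low SNRs.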
Language :
English
Popular science :
No
Collections :
Source :
Files
- https://hal-centralesupelec.archives-ouvertes.fr/hal-01758637/document (Open access)
- https://doi.org/10.3390/e20030203 (Open access)