Universality of Bayesian mixture predictors
Document type:
Conference paper with proceedings
Title:
Universality of Bayesian mixture predictors
Author(s):
Conference title:
ALT 2017 - 28th International Conference on Algorithmic Learning Theory
City:
Kyoto
Country:
Japan
Conference start date:
2017-10-15
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Statistics [stat]/Statistics Theory [stat.TH]
English abstract:
The problem is that of sequential probability forecasting for finite-valued time series. The data is generated by an unknown probability distribution over the space of all one-way infinite sequences. It is known that this measure belongs to a given set C, but the latter is completely arbitrary (uncountably infinite, without any structure given). The performance is measured with asymptotic average log loss. In this work it is shown that the minimax asymptotic performance is always attainable, and that it is attained by a convex combination of countably many measures from the set C (a Bayesian mixture). This was previously only known for the case when the best achievable asymptotic error is 0. This also contrasts with previous results showing that, in the non-realizable case, all Bayesian mixtures may be suboptimal, while there is a predictor that achieves the optimal performance.
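As a reading aid, the following LaTeX sketch spells out one common formalization of the setting described in the abstract; the notation (alphabet A, measures mu, rho, nu, weights w_k) is assumed for illustration and is not taken from the record itself.

% Minimal sketch, assuming one standard formalization of the setting above;
% the symbols A, mu, rho, nu, w_k are illustrative and not from the record.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Data $x_1, x_2, \dots$ over a finite alphabet $A$ are generated by an unknown
measure $\mu$ belonging to an arbitrary set $C$ of measures on one-way
infinite sequences. A common way to measure the performance of a predictor
(probability measure) $\rho$ is the expected average log-loss regret
\[
  L_n(\mu,\rho) = \frac{1}{n}\,
  \mathbb{E}_{\mu}\!\left[\log\frac{\mu(x_{1..n})}{\rho(x_{1..n})}\right],
  \qquad
  \bar L(\mu,\rho) = \limsup_{n\to\infty} L_n(\mu,\rho).
\]
A Bayesian mixture over countably many measures $\mu_k \in C$ with positive
weights $w_k$ summing to one is
\[
  \nu = \sum_{k\in\mathbb{N}} w_k\,\mu_k .
\]
The statement in the abstract then reads: there exist such $\mu_k, w_k$ with
\[
  \sup_{\mu\in C}\ \bar L(\mu,\nu)
  \;=\;
  \inf_{\rho}\ \sup_{\mu\in C}\ \bar L(\mu,\rho),
\]
i.e.\ the countable mixture $\nu$ attains the minimax asymptotic performance.
\end{document}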
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files:
- http://arxiv.org/pdf/1610.08249 (open access)
- 1610.08249 (open access)