Decentralized Collaborative Learning of Personalized Models over Networks
Document type:
Research report
Title:
Decentralized Collaborative Learning of Personalized Models over Networks
Author(s):
Vanhaesebrouck, Paul [Author]
Machine Learning in Information Networks [MAGNET]
Bellet, Aurélien [Author]
Machine Learning in Information Networks [MAGNET]
Tommasi, Marc [Author]
Université de Lille
Machine Learning in Information Networks [MAGNET]
Institution:
INRIA Lille
Publication date:
2016-10-17
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC]
Computer Science [cs]/Systems and Control [cs.SY]
Statistics [stat]/Other [stat.ML]
Abstract: [en]
We consider a set of learning agents in a collaborative peer-to-peer network, where each agent learns a personalized model according to its own learning objective. The question addressed in this paper is: how can agents improve upon their locally trained model by communicating with other agents that have similar objectives? We introduce and analyze two asynchronous gossip algorithms running in a fully decentralized manner. Our first approach, inspired by label propagation, aims to smooth pre-trained local models over the network while accounting for the confidence that each agent has in its initial model. In our second approach, agents jointly learn and propagate their model by making iterative updates based on both their local dataset and the behavior of their neighbors. Our algorithm to optimize this challenging objective in a decentralized way is based on ADMM.
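To make the first approach more concrete, the following is a minimal, illustrative sketch (not the authors' exact algorithm) of asynchronous gossip-style model smoothing: at each step a random agent and one of its neighbors move their models toward their pairwise average, while each agent stays anchored to its pre-trained local model in proportion to its confidence. The network topology, confidence values, and variable names below are assumptions made for illustration only.

```python
import numpy as np

# Illustrative sketch only: gossip smoothing of personalized models,
# anchored to each agent's pre-trained model by a confidence weight.
rng = np.random.default_rng(0)

n_agents, dim = 10, 5
# Pre-trained local models, one parameter vector per agent (assumed given).
local_models = rng.normal(size=(n_agents, dim))
models = local_models.copy()
# Confidence of each agent in its initial model (e.g. from local data size).
confidence = rng.uniform(0.1, 1.0, size=n_agents)
# Assumed ring network: agent i is connected to agents i-1 and i+1.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

for _ in range(1000):
    # Asynchronous step: one agent wakes up and contacts a random neighbor.
    i = rng.integers(n_agents)
    j = rng.choice(neighbors[i])
    pair_avg = (models[i] + models[j]) / 2.0
    for a in (i, j):
        # Smoothing update: pull toward the pairwise average of current models,
        # weighted against the agent's own initial model by its confidence.
        models[a] = confidence[a] * local_models[a] + (1 - confidence[a]) * pair_avg
```

The second approach described in the abstract (joint learning and propagation via decentralized ADMM updates over local losses) is not sketched here; see the paper itself for its formulation.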
Language:
English
Collections:
Source:
Files
- https://hal.inria.fr/hal-01383544/document
- Open access
- Access the document
- http://arxiv.org/pdf/1610.05202
- Open access
- Access the document