Fully Decentralized Joint Learning of Personalized Models and Collaboration Graphs
Document type:
Research report: Other scientific communication (conference without proceedings - poster - seminar...)
Title:
Fully Decentralized Joint Learning of Personalized Models and Collaboration Graphs
Author(s):
Zantedeschi, Valentina [Author]
Laboratoire Hubert Curien [LabHC]
Bellet, Aurelien [Author]
Machine Learning in Information Networks [MAGNET]
Tommasi, Marc [Author]
Machine Learning in Information Networks [MAGNET]
Laboratoire Hubert Curien [LabHC]
Institution:
Inria
Publication date:
2019
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Statistics [stat]/Machine Learning [stat.ML]
English abstract: [en]
We consider the fully decentralized machine learning scenario where many users with personal datasets collaborate to learn models through local peer-to-peer exchanges, without a central coordinator. We propose to train personalized models that leverage a collaboration graph describing the relationships between the users' personal tasks, which we learn jointly with the models. Our fully decentralized optimization procedure alternates between training nonlinear models given the graph in a greedy boosting manner, and updating the collaboration graph (with controlled sparsity) given the models. Throughout the process, users exchange messages only with a small number of peers (their direct neighbors in the graph and a few random users), ensuring that the procedure naturally scales to large numbers of users. We analyze the convergence rate, memory and communication complexity of our approach, and demonstrate its benefits compared to competing techniques on synthetic and real datasets.
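The alternating scheme described in the abstract can be illustrated with a minimal, centralized toy simulation. This sketch makes several simplifying assumptions not taken from the paper: it uses linear models instead of greedily boosted nonlinear models, a Gaussian-kernel reweighting with a hard threshold as a stand-in for the sparsity-controlled graph update, and it updates all users in one loop rather than via peer-to-peer message exchanges. All names and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n users, each with a small personal linear-regression task.
n, d, m = 6, 3, 20
true_models = rng.normal(size=(n, d))
X = rng.normal(size=(n, m, d))
y = np.einsum("umd,ud->um", X, true_models) + 0.1 * rng.normal(size=(n, m))

def mean_local_loss(theta):
    residuals = np.einsum("umd,ud->um", X, theta) - y
    return float((residuals ** 2).mean())

theta = np.zeros((n, d))                      # one personalized model per user
W = (np.ones((n, n)) - np.eye(n)) / (n - 1)   # uniform initial collaboration graph

mu, lr = 0.5, 0.05                            # smoothness strength, step size
loss_start = mean_local_loss(theta)
for _ in range(200):
    # (a) Model step: local gradient plus a smoothness pull toward the
    #     models of the user's graph neighbors, weighted by W.
    preds = np.einsum("umd,ud->um", X, theta)
    grads = np.einsum("umd,um->ud", X, preds - y) / m
    theta -= lr * (grads + mu * (theta - W @ theta))

    # (b) Graph step: reweight edges by model similarity, then sparsify
    #     small weights (a crude proxy for controlled sparsity).
    dist = np.linalg.norm(theta[:, None] - theta[None, :], axis=2)
    W = np.exp(-dist)
    np.fill_diagonal(W, 0.0)
    W[W < 0.1] = 0.0
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

loss_end = mean_local_loss(theta)
assert loss_end < loss_start  # alternating updates reduce the average local loss
```

In the actual decentralized setting, step (a) would be carried out by each user through message exchanges with direct neighbors and a few random peers, so no node ever holds the full model matrix or graph.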
Language:
English
ANR project:
Collections:
Source:
Files
- https://hal.inria.fr/hal-02166433/document
- Open access
- Access the document
- 1901.08460.pdf
- Open access
- Access the document