Improved large-scale graph learning through ridge spectral sparsification
Document type:
Conference paper with proceedings
Title:
Improved large-scale graph learning through ridge spectral sparsification
Author(s):
Calandriello, Daniele [Author]
Italian Institute of Technology = Istituto Italiano di Tecnologia [IIT]
Sequential Learning [SEQUEL]
Koutis, Ioannis [Author]
New Jersey Institute of Technology [Newark] [NJIT]
Lazaric, Alessandro [Author]
Facebook AI Research [Paris] [FAIR]
Valko, Michal [Author]
Sequential Learning [SEQUEL]
Conference title:
International Conference on Machine Learning
City:
Stockholm
Country:
Sweden
Conference start date:
2018-07-10
HAL discipline(s):
Statistiques [stat]/Machine Learning [stat.ML]
Abstract (English):
The representation and learning benefits of methods based on graph Laplacians, such as Laplacian smoothing or the harmonic function solution for semi-supervised learning (SSL), are empirically and theoretically well supported. Nonetheless, the exact versions of these methods scale poorly with the number of nodes n of the graph. In this paper, we combine a spectral sparsification routine with Laplacian learning. Given a graph G as input, our algorithm computes a sparsifier in a distributed way in O(n log^3(n)) time, O(m log^3(n)) work and O(n log(n)) memory, using only log(n) rounds of communication. Furthermore, motivated by the regularization often employed in learning algorithms, we show that constructing sparsifiers that preserve the spectrum of the Laplacian only up to the regularization level may drastically reduce the size of the final graph. By constructing a spectrally-similar graph, we are able to bound the error induced by the sparsification for a variety of downstream tasks (e.g., SSL). We empirically validate the theoretical guarantees on the Amazon co-purchase graph and compare to state-of-the-art heuristics.
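For context, the harmonic function solution the abstract refers to can be sketched in a few lines. This is a hypothetical pure-Python illustration, not the paper's distributed implementation: labeled nodes are clamped to their labels, and each unlabeled node is iterated to the weighted average of its neighbors (the harmonic property), which is equivalent to solving L_uu f_u = -L_ul y_l. The paper's contribution is that running such solvers on a spectral sparsifier of the graph preserves the result up to the regularization level, at a fraction of the cost.

```python
# Hedged sketch of harmonic-function SSL on a graph (hypothetical helper,
# not the authors' algorithm). Gauss-Seidel iteration on unlabeled nodes.

def harmonic_ssl(adj, labels, iters=1000):
    """adj: dict node -> {neighbor: weight}; labels: dict node -> value.

    Returns a dict node -> score; labeled nodes keep their labels,
    unlabeled nodes converge to the weighted average of their neighbors.
    """
    f = {v: labels.get(v, 0.5) for v in adj}
    for _ in range(iters):
        for v in adj:
            if v in labels:
                continue  # labeled nodes stay clamped
            total = sum(adj[v].values())
            f[v] = sum(w * f[u] for u, w in adj[v].items()) / total
    return f

# Toy example: path graph 0-1-2-3 with unit weights, ends labeled 0 and 1.
# The harmonic solution interpolates linearly: f(1) = 1/3, f(2) = 2/3.
adj = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0, 3: 1.0}, 3: {2: 1.0}}
f = harmonic_ssl(adj, {0: 0.0, 3: 1.0})
```

On a dense graph each sweep costs O(m); replacing the graph with a spectrally similar sparsifier, as the paper does, shrinks m while provably bounding the error of the resulting solution.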
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files:
- https://hal.inria.fr/hal-01810980/document
- Open access
- calandriello2018improved.pdf