Trade-offs in Large-Scale Distributed Tuplewise Estimation and Learning
Document type :
Conference paper with proceedings
Title :
Trade-offs in Large-Scale Distributed Tuplewise Estimation and Learning
Author(s) :
Vogel, Robin [Author]
Signal, Statistique et Apprentissage [S2A]
Département Images, Données, Signal [IDS]
Bellet, Aurelien [Author]
Machine Learning in Information Networks [MAGNET]
Clémençon, Stéphan [Author]
Signal, Statistique et Apprentissage [S2A]
Département Images, Données, Signal [IDS]
Jelassi, Ons [Author]
Signal, Statistique et Apprentissage [S2A]
Département Images, Données, Signal [IDS]
Papa, Guillaume [Author]
Signal, Statistique et Apprentissage [S2A]
Département Images, Données, Signal [IDS]
Conference title :
ECML PKDD 2019 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
City :
Würzburg
Country :
Germany
Start date of the conference :
2019-09-16
Book title :
Lecture Notes in Computer Science (LNAI, volume 11907)
Publisher :
Springer
Publication date :
2020-04-30
English keyword(s) :
Distributed Machine Learning
Distributed Data Processing
U-Statistics
AUC Optimization
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
Statistics [stat]/Machine Learning [stat.ML]
English abstract : [en]
The development of cluster computing frameworks has allowed practitioners to scale out various statistical estimation and machine learning algorithms with minimal programming effort. This is especially true for machine learning problems whose objective function is nicely separable across individual data points, such as classification and regression. In contrast, statistical learning tasks involving pairs (or more generally tuples) of data points, such as metric learning, clustering or ranking, do not lend themselves as easily to data-parallelism and in-memory computing. In this paper, we investigate how to balance statistical performance and computational efficiency in such distributed tuplewise statistical problems. We first propose a simple strategy based on occasionally repartitioning data across workers between parallel computation stages, where the number of repartitioning steps rules the trade-off between accuracy and runtime. We then present theoretical results highlighting the variance reduction achieved by the proposed method, and extend our results to design distributed stochastic gradient descent algorithms for tuplewise empirical risk minimization. Our results are supported by numerical experiments in pairwise statistical estimation and learning on synthetic and real-world datasets.
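The repartition-and-average strategy described in the abstract can be sketched in a few lines. The Python snippet below is an illustrative reconstruction, not the authors' code: each of several simulated workers computes the pairwise AUC (a U-statistic) over its local data block only, and these within-worker estimates are averaged over several random repartitions of the dataset. All names and parameter values (local_auc, n_workers, n_repartitions) are assumptions for illustration; raising n_repartitions lowers variance at the cost of extra shuffling rounds, which is the accuracy/runtime trade-off the paper studies.

# Hedged sketch of repartition-then-average pairwise AUC estimation.
# Illustrative only; not the implementation from the paper.
import numpy as np

def local_auc(scores_pos, scores_neg):
    """Within-worker pairwise AUC: fraction of (pos, neg) pairs ranked correctly."""
    if len(scores_pos) == 0 or len(scores_neg) == 0:
        return np.nan
    diff = scores_pos[:, None] - scores_neg[None, :]
    return np.mean((diff > 0) + 0.5 * (diff == 0))

def distributed_auc(scores, labels, n_workers=8, n_repartitions=4, seed=0):
    """Average within-worker AUC estimates over several random repartitions.

    Each repartitioning step reshuffles the data across workers, so more
    pairs are effectively compared and the variance of the final estimate
    drops, at the price of one extra communication round per repartition.
    """
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_repartitions):
        perm = rng.permutation(len(scores))        # one repartitioning (shuffle) step
        for block in np.array_split(perm, n_workers):
            s, y = scores[block], labels[block]
            est = local_auc(s[y == 1], s[y == 0])  # only within-worker pairs
            if not np.isnan(est):                  # skip workers missing a class
                estimates.append(est)
    return float(np.mean(estimates))

# Toy usage: positives score higher on average, so the AUC is well above 0.5.
rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=2000)
scores = rng.normal(loc=labels.astype(float), scale=1.0)
print(distributed_auc(scores, labels))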
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
Source :
Files
- ecml19.pdf: https://hal.inria.fr/hal-02166428/document (Open access)