Don't Forget What I did?: Assessing Client ...
Document type :
Preprint or working paper
Title :
Don't Forget What I did?: Assessing Client Contributions in Federated Learning
Author(s) :
Ghosh, Bishwamittra [Auteur]
Max Planck Institute for Software Systems [MPI-SWS]
Basu, Debabrota [Auteur]
Centrale Lille
Université de Lille
Centre Inria de l'Université de Lille
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Scool [Scool]
Huazhu, Fu [Auteur]
Institute of High Performance Computing [IHPC]
Yuan, Wang [Auteur]
Institute of High Performance Computing [IHPC]
Kanagavelu, Renuga [Auteur]
Institute of High Performance Computing [IHPC]
Peng, Jiang Jin [Auteur]
Yong, Liu [Auteur]
Institute of High Performance Computing [IHPC]
Rick, Goh Siow Mong [Auteur]
Institute of High Performance Computing [IHPC]
Qingsong, Wei [Auteur]
Institute of High Performance Computing [IHPC]
Publication date :
2024-03-11
English keyword(s) :
Federated learning FL
Client contributions
Shapley value
Dynamic Shapley values
Client selection
Malicious client detection
Poisoning Attacks
Integer linear program ILP
Two-sided fairness
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Artificial Intelligence [cs.AI]
Computer Science [cs]/Cryptography and Security [cs.CR]
Computer Science [cs]/Computers and Society [cs.CY]
Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC]
Computer Science [cs]/Computer Science and Game Theory [cs.GT]
English abstract : [en]
Federated Learning (FL) is a collaborative machine learning (ML) approach in which multiple clients participate in training an ML model without exposing their private data. Fair and accurate assessment of client contributions is an important problem in FL: it facilitates incentive allocation and encourages diverse clients to participate in training a unified model. Existing methods for assessing client contributions adopt cooperative game-theoretic concepts, such as Shapley values, but under simplified assumptions. In this paper, we propose a history-aware game-theoretic framework, called FLContrib, to assess client contributions when a subset of (potentially non-i.i.d.) clients participates in each epoch of FL training. By exploiting the FL training process and the linearity of the Shapley value, FLContrib yields a historical timeline of client contributions as FL training progresses over epochs. Additionally, to assess client contributions under a limited computational budget, we propose a scheduling procedure that applies a two-sided fairness criterion to perform the expensive Shapley value computation in only a subset of training epochs. In experiments, we demonstrate a controlled trade-off between the correctness and the efficiency of client contributions assessed via FLContrib. To demonstrate the benefits of history-aware client contributions, we apply FLContrib to detect dishonest clients conducting data poisoning in FL training.
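The linearity property mentioned in the abstract is what makes the historical timeline cheap to maintain: if each epoch defines its own coalition game over the participating clients, the Shapley values of the per-epoch games can simply be summed to get cumulative contributions. The sketch below illustrates this idea only; the client names, per-epoch utilities, and the additive toy utility are hypothetical stand-ins (real FL utilities, e.g. validation-accuracy gains of aggregated models, are not additive), and this is not the authors' implementation.

```python
import math
from itertools import combinations

def shapley_values(clients, utility):
    """Exact Shapley values of `utility` over `clients`.
    Enumerates all coalitions, so it is exponential in len(clients)."""
    n = len(clients)
    phi = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for k in range(n):
            # Weight of coalitions of size k that exclude c.
            w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
            for coalition in combinations(others, k):
                s = set(coalition)
                phi[c] += w * (utility(s | {c}) - utility(s))
    return phi

def make_utility(per_client_gain):
    # Additive toy utility: a coalition's value is the sum of its
    # members' (hypothetical) per-epoch gains. Chosen so the example
    # is easy to verify by hand, not as a realistic FL utility.
    return lambda S: sum(per_client_gain[c] for c in S)

# Two epochs; only a subset of clients participates in each, as in the paper.
epochs = [
    ({"A", "B"}, make_utility({"A": 0.3, "B": 0.1})),
    ({"B", "C"}, make_utility({"B": 0.2, "C": 0.4})),
]

# History-aware accumulation: by linearity of the Shapley value, summing
# the per-epoch Shapley values gives each client's total contribution.
total = {}
for participants, u in epochs:
    for c, v in shapley_values(sorted(participants), u).items():
        total[c] = total.get(c, 0.0) + v
```

With the additive toy utility, each client's Shapley value per epoch equals its own gain, so `total` comes out to roughly A: 0.3, B: 0.3 (0.1 + 0.2 across epochs), C: 0.4, and the running sums form the timeline FLContrib maintains.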
Language :
English
Files :
- 2403.07151
- Open access