"How Biased are Your Features?": Computing ...
Document type:
Conference paper with proceedings
DOI:
Title:
"How Biased are Your Features?": Computing Fairness Influence Functions with Global Sensitivity Analysis
Author(s):
Ghosh, Bishwamittra [Author]
National University of Singapore [NUS]
Basu, Debabrota [Author]
Scool [Scool]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Université de Lille
Meel, Kuldeep [Author]
National University of Singapore [NUS]
Conference title:
FAccT '23: the 2023 ACM Conference on Fairness, Accountability, and Transparency
City:
Chicago, IL
Country:
United States of America
Conference start date:
2023-06-12
Publisher:
ACM
English keyword(s):
Bias in AI
Fairness Verification
Algorithm auditing
Explainable Artificial Intelligence
Global sensitivity analysis
Variance decomposition
Fair ML
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Artificial Intelligence [cs.AI]
Statistics [stat]/Applications [stat.AP]
English abstract:
Fairness in machine learning has attained significant focus due to its widespread application in high-stakes decision-making tasks. Unregulated machine learning classifiers can exhibit bias towards certain demographic groups in data, hence the quantification and mitigation of classifier bias is a central concern in fair machine learning. In this paper, we aim to quantify the influence of different features in a dataset on the bias of a classifier. To do this, we introduce the Fairness Influence Function (FIF), which decomposes bias into its components among individual features and the intersections of multiple features. The key idea is to represent existing group fairness metrics as the difference of scaled conditional variances in the classifier's prediction and to apply a decomposition of variance according to global sensitivity analysis. To estimate FIFs, we instantiate an algorithm that applies this variance decomposition to the classifier's prediction via local regression. Experiments demonstrate that the proposed algorithm captures FIFs of individual and intersectional features, provides a better approximation of bias based on FIFs, shows a higher correlation of FIFs with fairness interventions, and detects changes in bias due to fairness affirmative/punitive actions in the classifier. The code is available at https://github.com/ReAILe/bias-explainer. The extended version of the paper is at https://arxiv.org/pdf/2206.00667.pdf.
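As a rough illustration of the variance-decomposition idea behind FIFs, the sketch below estimates the first-order (per-feature) share of the variance of a classifier's prediction using simple quantile binning. It is a minimal toy under stated assumptions (synthetic data, a scikit-learn logistic regression, binning in place of the local-regression estimator described in the abstract); it is not the authors' released bias-explainer implementation.

```python
# Minimal sketch: first-order (Sobol-style) variance shares of a classifier's
# prediction, estimated by quantile binning. Illustrative only; not the
# paper's algorithm or the bias-explainer code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: 3 features, label driven mostly by feature 0.
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + 0.3 * rng.normal(size=2000) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
pred = clf.predict_proba(X)[:, 1]  # classifier's soft prediction

def first_order_variance_share(X, pred, j, n_bins=10):
    """Estimate Var_{X_j}(E[pred | X_j]) / Var(pred) by binning feature j."""
    edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
    idx = np.digitize(X[:, j], edges[1:-1])  # bin index in 0..n_bins-1
    cond_means = np.array([pred[idx == b].mean() for b in range(n_bins) if (idx == b).any()])
    weights = np.array([(idx == b).mean() for b in range(n_bins) if (idx == b).any()])
    overall = np.average(cond_means, weights=weights)
    # Weighted variance of the conditional means, normalised by total variance.
    return np.average((cond_means - overall) ** 2, weights=weights) / pred.var()

for j in range(X.shape[1]):
    print(f"feature {j}: first-order variance share ≈ {first_order_variance_share(X, pred, j):.3f}")
```

In this toy setup, feature 0 should account for most of the prediction variance; the paper's FIFs additionally attribute the *difference* of such conditional variances across demographic groups, which is what ties the decomposition to group fairness metrics.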
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files
- http://arxiv.org/pdf/2206.00667
- Open access
- Access the document
- 2206.00667
- Open access
- Access the document