Algorithmic fairness verification with graphical models
Document type:
Conference paper with proceedings
Title:
Algorithmic fairness verification with graphical models
Author(s):
Ghosh, Bishwamittra [Author]
National University of Singapore [NUS]
Basu, Debabrota [Author]
Scool [Scool]
Meel, Kuldeep S. [Author]
School of Computing [Singapore] [NUS]
National University of Singapore [NUS]
Conference title:
AAAI-2022 - 36th AAAI Conference on Artificial Intelligence
City:
Virtual
Country:
United States of America
Conference start date:
2022-02-22
Publication date:
2022
HAL discipline(s):
Humanities and Social Sciences/Philosophy
Humanities and Social Sciences/History, Philosophy and Sociology of Sciences
Computer Science [cs]/Computers and Society [cs.CY]
Computer Science [cs]/Computational Geometry [cs.CG]
Cognitive Science/Neuroscience
Computer Science [cs]/Artificial Intelligence [cs.AI]
Computer Science [cs]/Computational Complexity [cs.CC]
Computer Science [cs]
Cognitive Science/Computer Science
Mathematics [math]/Statistics [math.ST]
Mathematics [math]/Optimization and Control [math.OC]
Mathematics [math]/Probability [math.PR]
Mathematics [math]
Mathematics [math]/Information Theory [math.IT]
Mathematics [math]/Differential Geometry [math.DG]
Mathematics [math]/Dynamical Systems [math.DS]
Computer Science [cs]/Systems and Control [cs.SY]
Computer Science [cs]/Robotics [cs.RO]
Computer Science [cs]/Operations Research [math.OC]
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Information Theory [cs.IT]
Computer Science [cs]/Computer Science and Game Theory [cs.GT]
Computer Science [cs]/Discrete Mathematics [cs.DM]
Abstract (English): [en]
In recent years, machine learning (ML) algorithms have been deployed in safety-critical and high-stakes decision-making, where the fairness of algorithms is of paramount importance. Fairness in ML centers on detecting bias towards certain demographic populations induced by an ML classifier and proposes algorithmic solutions to mitigate the bias with respect to different fairness definitions. To this end, several fairness verifiers have been proposed that compute the bias in the prediction of an ML classifier (essentially beyond a finite dataset) given the probability distribution of input features. In the context of verifying linear classifiers, existing fairness verifiers are limited in accuracy, due to imprecise modeling of correlations among features, and in scalability, due to restrictive formulations of the classifiers as SSAT/SMT formulas or due to sampling. In this paper, we propose an efficient fairness verifier, called FVGM, that encodes the correlations among features as a Bayesian network. In contrast to existing verifiers, FVGM proposes a stochastic subset-sum based approach for verifying linear classifiers. Experimentally, we show that FVGM leads to an accurate and scalable assessment for more diverse families of fairness-enhancing algorithms, fairness attacks, and group/causal fairness metrics than the state of the art. We also demonstrate that FVGM facilitates the computation of fairness influence functions as a stepping stone to detecting the source of bias induced by subsets of features.
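The setting described in the abstract (computing a classifier's bias from a distribution over features rather than from a finite dataset) can be illustrated with a minimal toy sketch. This is not the paper's FVGM implementation; the Bayesian-network factorization, feature names, weights, and probabilities below are all invented for illustration, and the fairness metric shown is disparate impact computed by exhaustive enumeration over two binary features.

```python
from itertools import product

# Toy feature distribution factored as a small Bayesian network:
#   P(X1 | A) * P(X2 | X1), with A the sensitive attribute.
# All numbers are hypothetical.
p_x1_given_a = {0: 0.3, 1: 0.7}    # P(X1 = 1 | A = a)
p_x2_given_x1 = {0: 0.4, 1: 0.8}   # P(X2 = 1 | X1 = x1)

# Linear classifier on (X1, X2): predict positive when w . x + b > 0.
w, b = [1.0, 1.0], -1.5

def positive_rate(a):
    """P(prediction = 1 | A = a), by enumerating all feature assignments."""
    total = 0.0
    for x1, x2 in product((0, 1), repeat=2):
        prob = (p_x1_given_a[a] if x1 else 1 - p_x1_given_a[a]) \
             * (p_x2_given_x1[x1] if x2 else 1 - p_x2_given_x1[x1])
        if w[0] * x1 + w[1] * x2 + b > 0:
            total += prob
    return total

# Disparate impact: ratio of positive-prediction rates between groups
# (closer to 1 means less group bias).
rates = [positive_rate(0), positive_rate(1)]
di = min(rates) / max(rates)
print(round(di, 3))  # → 0.429
```

Exhaustive enumeration is exponential in the number of features, which is why a dedicated verifier such as FVGM uses a stochastic subset-sum formulation instead of brute force; the sketch only conveys what quantity is being verified.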
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popularization:
No
Collections:
Source:
Files
- https://hal.archives-ouvertes.fr/hal-03770361/document
- Open access
- Access the document
- http://arxiv.org/pdf/2109.09447
- Open access
- Access the document