Optimal Classification under Performative Distribution Shift
Document type:
Conference paper with proceedings
Permanent URL:
Title:
Optimal Classification under Performative Distribution Shift
Author(s):
Cyffers, Edwige [Author]
Machine Learning in Information Networks [MAGNET]
Université de Lille
Pydi, Muni Sreenivas [Author]
Machine Intelligence and Learning Systems [MILES]
Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]
Atif, Jamal [Author]
Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]
Cappé, Olivier [Author]
Département d'informatique - ENS-PSL [DI-ENS]
Conference:
38th Conference on Neural Information Processing Systems
City:
Vancouver
Country:
Canada
Conference start date:
2024-12-10
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
English abstract:
Performative learning addresses the increasingly pervasive situations in which algorithmic decisions may induce changes in the data distribution as a consequence of their public deployment. We propose a novel view in which these performative effects are modelled as push-forward measures. This general framework encompasses existing models and enables novel performative gradient estimation methods, leading to more efficient and scalable learning strategies. For distribution shifts, unlike previous models which require full specification of the data distribution, we only assume knowledge of the shift operator that represents the performative changes. This approach can also be integrated into various change-of-variable-based models, such as VAEs or normalizing flows. Focusing on classification with a linear-in-parameters performative effect, we prove the convexity of the performative risk under a new set of assumptions. Notably, we do not limit the strength of performative effects but rather their direction, requiring only that classification becomes harder when deploying more accurate models. In this case, we also establish a connection with adversarially robust classification by reformulating the minimization of the performative risk as a min-max variational problem. Finally, we illustrate our approach on synthetic and real datasets.
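Illustrative sketch (not code from the paper): a minimal Python example of the abstract's setting, in which the performative effect is modelled as a push-forward of the features by a shift operator that is linear in the deployed parameters, and the performative risk of a logistic classifier is evaluated on the shifted distribution. The specific label-dependent translation (points move toward the decision boundary by an amount proportional to theta), the strength eps, and the toy Gaussian data are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

def shift_operator(X, y, theta, eps=0.5):
    # Push-forward map: each point moves toward the decision boundary by
    # eps * theta, so classification becomes harder as the deployed model
    # becomes more accurate. The linear-in-theta form and eps are assumptions.
    return X - eps * y[:, None] * theta

def performative_risk(theta, X, y):
    # Logistic loss of classifier theta evaluated on the shifted
    # (post-deployment) feature distribution.
    X_shifted = shift_operator(X, y, theta)
    margins = y * (X_shifted @ theta)
    return np.mean(np.log1p(np.exp(-margins)))

# Toy data: two Gaussian classes in 2D with labels in {-1, +1}.
n = 500
X = np.vstack([rng.normal(+1.0, 1.0, size=(n, 2)),
               rng.normal(-1.0, 1.0, size=(n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

theta = np.array([0.5, 0.5])
print("Performative risk at theta:", performative_risk(theta, X, y))

Under this particular shift, the margin of every point is reduced by eps * ||theta||^2, which gives a concrete (assumed) instance of the abstract's requirement that performative effects only make classification harder in the direction of more accurate models.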
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
ANR project:
Collections:
Source:
Deposit date:
2024-11-05T03:05:43Z
Files:
- paper.pdf (open access)