Document type:
Preprint or Working Paper
Title:
A Bandit Approach with Evolutionary Operators for Model Selection
Application to Neural Architecture Optimization for Image Classification
Author(s):
Brégère, Margaux [Author]
Laboratoire de Probabilités, Statistique et Modélisation [LPSM (UMR_8001)]
EDF R&D [EDF R&D]
Keisler, Julie [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
EDF R&D [EDF R&D]
Inria Lille - Nord Europe
Keywords (English):
Infinite-armed bandits; Model selection; Neural architecture optimisation; Hyperparameter optimisation; Evolutionary algorithm; Image classification; AutoML; Online learning
HAL discipline(s):
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Artificial Intelligence [cs.AI]
Computer Science [cs]/Neural Networks [cs.NE]
Mathematics [math]/Optimization and Control [math.OC]
Abstract (English):
This paper formulates model selection as an infinite-armed bandit problem. The models are arms, and picking an arm corresponds to a partial training of the model (resource allocation). The reward is the accuracy of the selected model after its partial training. In this best arm identification problem, regret is the gap between the expected accuracy of the optimal model and that of the model finally chosen. We first consider a straightforward generalization of UCB-E to the stochastic infinite-armed bandit problem and show that, under basic assumptions, the expected regret order is $T^{-\alpha}$ for some $\alpha \in (0,1/5)$ and $T$ the number of resources to allocate. From this vanilla algorithm, we introduce the algorithm Mutant-UCB that incorporates operators from evolutionary algorithms. Tests carried out on three open source image classification data sets attest to the relevance of this novel combining approach, which outperforms the state-of-the-art for a fixed budget.
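The abstract combines two ingredients: UCB-style optimistic resource allocation over a pool of models (arms), and evolutionary operators that spawn new arms by mutating promising ones. The toy sketch below illustrates that combination on simulated Bernoulli "accuracies". It is an illustration only, not the authors' Mutant-UCB: the mutation rule, the recommendation rule, and the parameters `exploration` and `mutate_prob` are all assumptions made for the sketch.

```python
import math
import random

def ucb_index(mean, pulls, exploration):
    """UCB-E-style optimistic index: empirical mean plus an exploration bonus."""
    if pulls == 0:
        return float("inf")
    return mean + math.sqrt(exploration / pulls)

def mutant_ucb_sketch(true_accs, budget, exploration=0.5, mutate_prob=0.3, seed=0):
    """Toy fixed-budget best-arm identification with an evolutionary mutation step.

    true_accs: hidden expected accuracies of the initial 'models' (arms).
    One pull stands in for one unit of partial training, observed as a
    Bernoulli reward. With probability mutate_prob, a pull of an
    already-sampled arm instead spawns a mutated copy of it (hidden
    accuracy perturbed by Gaussian noise), growing the arm pool.
    """
    rng = random.Random(seed)
    accs = list(true_accs)          # hidden means; the pool grows via mutation
    pulls = [0] * len(accs)
    sums = [0.0] * len(accs)
    for _ in range(budget):
        means = [s / n if n else 0.0 for s, n in zip(sums, pulls)]
        k = max(range(len(accs)),
                key=lambda a: ucb_index(means[a], pulls[a], exploration))
        if pulls[k] > 0 and rng.random() < mutate_prob:
            # evolutionary operator: clone-and-perturb the promising arm
            accs.append(min(1.0, max(0.0, accs[k] + rng.gauss(0.0, 0.05))))
            pulls.append(0)
            sums.append(0.0)
            k = len(accs) - 1       # allocate this pull to the fresh mutant
        reward = 1.0 if rng.random() < accs[k] else 0.0
        pulls[k] += 1
        sums[k] += reward
    # recommend the empirically best arm among sufficiently trained ones
    candidates = [a for a in range(len(accs)) if pulls[a] >= 10] or range(len(accs))
    best = max(candidates, key=lambda a: sums[a] / pulls[a] if pulls[a] else -1.0)
    return best, accs[best]
```

With `mutate_prob=0.0` this reduces to a plain UCB-E-like allocation over a fixed arm set; raising it trades exploitation of known arms for exploration of mutated variants, which is the tension the paper's fixed-budget analysis addresses.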
Language:
English
Collections:
Source:
Files:
- main.pdf (open access)
- 2402.05144 (open access)