Document type :
Preprint or Working Paper
Title :
A Bandit Approach with Evolutionary Operators for Model Selection
Application to Neural Architecture Optimization for Image Classification
Author(s) :
Brégère, Margaux [Author]
Laboratoire de Probabilités, Statistique et Modélisation [LPSM (UMR_8001)]
EDF R&D [EDF R&D]
Keisler, Julie [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
EDF R&D [EDF R&D]
Inria Lille - Nord Europe
English keyword(s) :
Infinite-armed bandits ; Model selection ; Neural architecture optimisation ; Hyperparameter optimisation ; Evolutionary algorithm ; Image classification ; AutoML ; Online learning
HAL domain(s) :
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Artificial Intelligence [cs.AI]
Computer Science [cs]/Neural Networks [cs.NE]
Mathematics [math]/Optimization and Control [math.OC]
English abstract : [en]
This paper formulates model selection as an infinite-armed bandit problem. The models are arms, and picking an arm corresponds to a partial training of the model (resource allocation). The reward is the accuracy of the selected model after its partial training. In this best arm identification problem, regret is the gap between the expected accuracy of the optimal model and that of the model finally chosen. We first consider a straightforward generalization of UCB-E to the stochastic infinite-armed bandit problem and show that, under basic assumptions, the expected regret order is $T^{-\alpha}$ for some $\alpha \in (0,1/5)$ and $T$ the number of resources to allocate. From this vanilla algorithm, we introduce the algorithm Mutant-UCB that incorporates operators from evolutionary algorithms. Tests carried out on three open source image classification data sets attest to the relevance of this novel combining approach, which outperforms the state-of-the-art for a fixed budget.
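The approach described in the abstract — arms are candidate models, each pull spends one unit of training resource, arms are ranked by a UCB-E-style index, and new arms are occasionally spawned by mutating a promising one — can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the mutation operator, the simulated accuracy function, and the parameters `c`, `n_init`, and `mutate_prob` are all hypothetical placeholders.

```python
import math
import random

def ucb_e_index(mean, pulls, c=2.0):
    """UCB-E-style index: empirical mean plus an exploration bonus.
    The exploration constant c is a hypothetical choice."""
    if pulls == 0:
        return float("inf")  # untried arms are pulled first
    return mean + math.sqrt(c / pulls)

def mutate(config):
    """Toy mutation operator: randomly perturb one hyperparameter.
    A stand-in for the architecture mutations an evolutionary
    algorithm would apply."""
    new = dict(config)
    key = random.choice(list(new))
    new[key] = new[key] * random.choice([0.5, 2.0])
    return new

def noisy_accuracy(config):
    """Toy reward: simulated accuracy of a partially trained model,
    peaking at a (hypothetical) ideal learning rate of 0.01."""
    quality = 1.0 / (1.0 + abs(math.log(config["lr"] / 0.01)))
    return min(1.0, max(0.0, quality + random.gauss(0, 0.05)))

def mutant_ucb_sketch(budget=200, n_init=5, mutate_prob=0.2, seed=0):
    """Allocate `budget` partial trainings over a growing arm set;
    return the arm with the best empirical accuracy."""
    random.seed(seed)
    arms = [{"config": {"lr": 10 ** random.uniform(-4, -1)},
             "mean": 0.0, "pulls": 0} for _ in range(n_init)]
    for _ in range(budget):
        if random.random() < mutate_prob:
            # evolutionary step: spawn a mutant of the current best arm
            best = max(arms, key=lambda a: a["mean"])
            arms.append({"config": mutate(best["config"]),
                         "mean": 0.0, "pulls": 0})
        # bandit step: pull the arm with the highest UCB-E index
        arm = max(arms, key=lambda a: ucb_e_index(a["mean"], a["pulls"]))
        reward = noisy_accuracy(arm["config"])
        arm["pulls"] += 1
        arm["mean"] += (reward - arm["mean"]) / arm["pulls"]
    return max(arms, key=lambda a: a["mean"])
```

In this fixed-budget setting the loop runs exactly `budget` times, matching the abstract's framing where $T$ is the number of resources to allocate and the recommendation is the model with the best observed accuracy.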
Language :
English
Collections :
Source :
Files
- main.pdf (Open access)
- 2402.05144 (Open access)