Speeding up the Multi-objective NAS Through Incremental Learning
Document type:
Conference paper with proceedings
Title:
Speeding up the Multi-objective NAS Through Incremental Learning
Author(s):
Garcia-Garcia, Cosijopii [Author]
Instituto Nacional de Astrofísica, Óptica y Electrónica [INAOE]
Derbel, Bilel [Author]
Université de Lille
Inria Lille - Nord Europe
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Optimisation de grande taille et calcul large échelle [BONUS]
Morales-Reyes, Alicia [Author]
Instituto Nacional de Astrofísica, Óptica y Electrónica [INAOE]
Escalante, Hugo Jair [Author]
Instituto Nacional de Astrofísica, Óptica y Electrónica [INAOE]
Conference title:
23rd Mexican International Conference on Artificial Intelligence
City:
Puebla
Country:
Mexico
Conference start date:
2024-10-21
Book title:
LNCS LNAI
Journal title:
Lecture Notes in Computer Science
Publisher:
Springer Nature Switzerland
Place of publication:
Cham
Publication date:
2025-10-17
HAL discipline(s):
Computer Science [cs]/Artificial Intelligence [cs.AI]
English abstract: [en]
Deep neural networks (DNNs), particularly convolutional neural networks (CNNs), have garnered significant attention in recent years for addressing a wide range of challenges in image processing and computer vision. Neural architecture search (NAS) has emerged as a crucial field aiming to automate the design and configuration of CNN models. In this paper, we propose a novel strategy to speed up the performance estimation of neural architectures by gradually increasing the size of the training set used for evaluation as the search progresses. We evaluate this approach using the CGP-NASV2 model, a multi-objective NAS method, on the CIFAR-100 dataset. Experimental results demonstrate a notable acceleration in the search process, achieving a speedup of 4.6 times compared to the baseline. Despite using limited data in the early stages, our proposed method effectively guides the search towards competitive architectures. This study highlights the efficacy of leveraging lower-fidelity estimates in NAS and paves the way for further research into accelerating the design of efficient CNN architectures.
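A minimal sketch of the evaluation idea described in the abstract, assuming a PyTorch setup: candidate architectures are scored on a training subset whose size grows with the search generation, so early (low-fidelity) evaluations are cheap. The names subset_fraction, make_subset_loader, start, and end are illustrative assumptions and not the authors' CGP-NASV2 implementation.

```python
# Sketch only: growing the training subset used to evaluate candidate
# architectures as the search progresses (lower-fidelity early estimates).
import torch
from torch.utils.data import DataLoader, Subset


def subset_fraction(generation: int, max_generations: int,
                    start: float = 0.1, end: float = 1.0) -> float:
    """Linearly grow the training-set fraction from `start` to `end`.

    The linear schedule is an assumption for illustration; the paper's
    increment rule may differ.
    """
    t = generation / max(1, max_generations - 1)
    return start + t * (end - start)


def make_subset_loader(full_train, generation: int, max_generations: int,
                       batch_size: int = 128) -> DataLoader:
    """Return a DataLoader over a random subset whose size grows per generation."""
    frac = subset_fraction(generation, max_generations)
    k = max(1, int(frac * len(full_train)))
    indices = torch.randperm(len(full_train))[:k]  # random subset of size k
    return DataLoader(Subset(full_train, indices.tolist()),
                      batch_size=batch_size, shuffle=True)


# Usage sketch (CIFAR-100, as in the paper's experiments):
# from torchvision import datasets, transforms
# train_set = datasets.CIFAR100("data", train=True, download=True,
#                               transform=transforms.ToTensor())
# for gen in range(max_generations):
#     loader = make_subset_loader(train_set, gen, max_generations)
#     # train and score each candidate architecture on `loader`
#     # (low-fidelity performance estimate used by the multi-objective search)
```

In the paper the multi-objective search loop itself comes from CGP-NASV2; the sketch above only illustrates how a progressively larger training subset could be supplied to whatever evaluation routine the search uses.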
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No