Realizing self-adaptive systems via online reinforcement learning and feature-model-guided exploration
Document type:
Journal article: Original article
Title:
Realizing self-adaptive systems via online reinforcement learning and feature-model-guided exploration
Author(s):
Metzger, Andreas [Author]
Universität Duisburg-Essen = University of Duisburg-Essen [Essen]
PALUNO - The Ruhr Institute for Software Technology [PALUNO]
Quinton, Clément [Author]
Self-adaptation for distributed services and large software systems [SPIRALS]
Mann, Zoltán [Author]
Universität Duisburg-Essen = University of Duisburg-Essen [Essen]
Baresi, Luciano [Author]
Politecnico di Milano [Milan] [POLIMI]
Pohl, Klaus [Author]
PALUNO - The Ruhr Institute for Software Technology [PALUNO]
Universität Duisburg-Essen = University of Duisburg-Essen [Essen]
Journal title:
Computing
Publisher:
Springer Verlag
Publication date:
2022-03-01
ISSN:
0010-485X
English keyword(s):
Evolution
Mathematics
Adaptive system
Reinforcement learning
Feature model
HAL discipline(s):
Computer Science [cs]/Software Engineering [cs.SE]
English abstract: [en]
A self-adaptive system can automatically maintain its quality requirements in the presence of dynamic environment changes. Developing a self-adaptive system may be difficult due to design-time uncertainty; e.g., anticipating all potential environment changes at design time is in most cases infeasible. To realize self-adaptive systems in the presence of design-time uncertainty, online machine learning, i.e., machine learning at runtime, is increasingly used. In particular, online reinforcement learning has been proposed, which learns suitable adaptation actions through interactions with the environment at runtime. To learn about its environment, online reinforcement learning has to select actions that were not selected before, which is known as exploration. How exploration happens impacts the performance of the learning process. We focus on two problems related to how adaptation actions are explored. First, existing solutions randomly explore adaptation actions and thus may exhibit slow learning if there are many possible adaptation actions. Second, they are unaware of system evolution and thus may explore new adaptation actions introduced during evolution rather late. We propose novel exploration strategies that use feature models (from software product line engineering) to guide exploration in the presence of many adaptation actions and of system evolution. Experimental results for two realistic self-adaptive systems indicate an average speed-up of the learning process of 33.7% in the presence of many adaptation actions, and of 50.6% in the presence of evolution. Research leading to these results received funding from the EU Horizon 2020 programme under Grant Agreements Nos. 780351 (ENACT) and 871525 (FogProtect), and from the ANR-19-CE25-0003-01 KOALA project.
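The exploration idea summarized in the abstract can be illustrated with a small sketch. The following Python fragment is a hypothetical, simplified illustration, not the paper's actual algorithm: a stateless epsilon-greedy learner whose exploration step draws only from adaptation actions that are valid configurations of a toy feature model. All feature names, rewards, and parameters here are invented for the example.

```python
import random

# Toy feature model (hypothetical): an XOR group of compression features
# plus an optional "cache" feature. Valid adaptation actions (configurations)
# are enumerated by hand here; the paper's approach derives the valid
# configurations from an actual feature model instead.
VALID_ACTIONS = [
    ("gzip",), ("lz4",), ("none",),
    ("gzip", "cache"), ("lz4", "cache"), ("none", "cache"),
]

def select_action(q_values, epsilon=0.2):
    """Epsilon-greedy selection restricted to feature-model-valid actions.

    With probability epsilon, explore a random *valid* configuration;
    otherwise exploit the configuration with the highest value so far.
    """
    if random.random() < epsilon:
        return random.choice(VALID_ACTIONS)  # exploration stays within the model
    return max(VALID_ACTIONS, key=lambda a: q_values.get(a, 0.0))

def update(q_values, action, reward, alpha=0.1):
    """Minimal stateless value update (exponential moving average)."""
    old = q_values.get(action, 0.0)
    q_values[action] = old + alpha * (reward - old)

# One learning run: the reward is a stand-in for a measured quality metric.
q = {}
for _ in range(100):
    action = select_action(q)
    reward = 1.0 if "cache" in action else 0.5
    update(q, action, reward)
```

Because both the exploration and the exploitation branches range only over `VALID_ACTIONS`, the learner never wastes interactions on configurations the feature model forbids; extending `VALID_ACTIONS` when the system evolves makes new actions immediately eligible for exploration.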
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files:
- Metzger2022_Article_RealizingSelf-adaptiveSystemsV.pdf (open access)
- s00607-022-01052-x.pdf (open access)