Document type:
Conference paper with proceedings
Title:
Feature Model-Guided Online Reinforcement Learning for Self-Adaptive Services
Author(s):
Metzger, Andreas [Author]
PALUNO - The Ruhr Institute for Software Technology [PALUNO]
Quinton, Clément [Author]
Self-adaptation for distributed services and large software systems [SPIRALS]
Université de Lille
Mann, Zoltán Ádám [Author]
PALUNO - The Ruhr Institute for Software Technology [PALUNO]
Baresi, Luciano [Author]
Politecnico di Milano [Milan] [POLIMI]
Pohl, Klaus [Author]
PALUNO - The Ruhr Institute for Software Technology [PALUNO]
Conference:
ICSOC'20: International Conference on Service Oriented Computing
City:
Dubai
Country:
United Arab Emirates
Conference start date:
2020-12-14
Proceedings title:
Proceedings of the 18th International Conference on Service-Oriented Computing
Keyword(s) in English:
Adaptation
Reinforcement learning
Feature model
Cloud service
HAL discipline(s):
Computer Science [cs]/Software Engineering [cs.SE]
Abstract in English: [en]
A self-adaptive service can maintain its QoS requirements in the presence of dynamic environment changes. To develop a self-adaptive service, service engineers have to create self-adaptation logic encoding when the service should execute which adaptation actions. However, developing self-adaptation logic may be difficult due to design-time uncertainty; e.g., anticipating all potential environment changes at design time is in most cases infeasible. Online reinforcement learning addresses design-time uncertainty by learning suitable adaptation actions through interactions with the environment at runtime. To learn more about its environment, reinforcement learning has to select actions that were not selected before, which is known as exploration. How exploration happens has an impact on the performance of the learning process. We focus on two problems related to how a service's adaptation actions are explored: (1) Existing solutions randomly explore adaptation actions and thus may exhibit slow learning if there are many possible adaptation actions to choose from. (2) Existing solutions are unaware of service evolution, and thus may explore new adaptation actions introduced during such evolution rather late. We propose novel exploration strategies that use feature models (from software product line engineering) to guide exploration in the presence of many adaptation actions and in the presence of service evolution. Experimental results for a self-adaptive cloud management service indicate an average speed-up of the learning process of 58.8% in the presence of many adaptation actions, and of 61.3% in the presence of service evolution. The improved learning performance in turn led to an average QoS improvement of 7.8% and 23.7%, respectively.
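For illustration only (not taken from the paper): a minimal Python sketch of the core idea the abstract describes — restricting the action space to configurations that are valid under a feature model, and, when exploring, preferring actions that have never been tried (such as ones newly introduced by service evolution). The feature names, constraints, and selection logic below are invented for this toy example.

```python
import random
from itertools import combinations

# Hypothetical toy feature model for a cloud service: four features
# with two cross-tree constraints (see is_valid below).
FEATURES = ["small_vm", "large_vm", "cache", "replication"]

def is_valid(config):
    """A configuration must pick exactly one VM size,
    and replication requires the cache feature."""
    one_vm = ("small_vm" in config) ^ ("large_vm" in config)
    repl_ok = ("replication" not in config) or ("cache" in config)
    return one_vm and repl_ok

def valid_configs():
    """Enumerate all valid adaptation actions (feature combinations)."""
    out = []
    for r in range(len(FEATURES) + 1):
        for combo in combinations(FEATURES, r):
            if is_valid(set(combo)):
                out.append(frozenset(combo))
    return out

def fm_guided_select(q_values, actions, visits, epsilon=0.2):
    """Epsilon-greedy selection over feature-model-valid actions that,
    when exploring, prefers actions never tried before (e.g. actions
    introduced by an evolution step of the service)."""
    if random.random() < epsilon:
        unexplored = [a for a in actions if visits.get(a, 0) == 0]
        pool = unexplored or actions
        return random.choice(pool)
    return max(actions, key=lambda a: q_values.get(a, 0.0))
```

The point of the sketch is that invalid feature combinations are never offered to the learner at all, and unexplored valid combinations are visited early instead of waiting for random exploration to stumble upon them.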
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
ANR project:
Collections:
Source:
Files
- https://hal.archives-ouvertes.fr/hal-02982029v2/document
- Open access
- Access the document
- ICSOC2020.pdf
- Open access
- Access the document