Transparent Memory Optimization using Slots
Document type:
Communication in a conference with proceedings
Title:
Transparent Memory Optimization using Slots
Author(s):
Tesone, Pablo [Author]
Centre for Digital Systems [CERI SN - IMT Nord Europe]
Analyses and Languages Constructs for Object-Oriented Application Evolution [RMOD]
Bragagnolo, Santiago [Author]
Inria Lille - Nord Europe
Ducasse, Stephane [Author]
Analyses and Languages Constructs for Object-Oriented Application Evolution [RMOD]
Denker, Marcus [Author]
Analyses and Languages Constructs for Object-Oriented Application Evolution [RMOD]
Conference title:
International Workshop on Smalltalk Technologies 2018
City:
Cagliari
Country:
Italy
Conference start date:
2018-09-10
Keyword(s) in English:
Memory consumption
optimization
first class instance variables
Object-oriented programming languages
HAL discipline(s):
Computer Science [cs]/Programming Languages [cs.PL]
Abstract in English:
Memory size limits the number of instances available in memory at any single time. This limitation affects applications that require large object graphs. Moose, a tool used in software analysis, is one such application: it parses software code and models it as an object graph, then performs multiple operations on such models (e.g., measuring, querying, mining, and analysing the code). However, not all the information in a model is present, because the model is designed to be used with different applications and programming languages (not all applications or programming languages use the same elements). Analysis of different models shows that between 30% and 50% of memory is wasted. Analysis of models produced in an industrial context reveals that models composed of several million instances used up to 2 GB of memory. In this work, we propose new slots, and combinations thereof, to transparently optimize memory consumption: NilAwareSlot automatically optimizes nils, and LazyInitializationSlot handles the case where an empty collection is required and used by many clients. We show that, with a limited number of changes, we improved the memory footprint of Moose models by around 30%. We also show that our solution has performance comparable to an ad hoc solution, but without the need for boilerplate code. To implement this solution, we leverage the existing Pharo support for slots, write barriers, and efficient forwarders.
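The paper's mechanism is built on Pharo's first-class slots; purely as a rough analogy of the lazy-initialization idea (the names `LazySlot`, `Entity`, and `default_factory` are hypothetical and not from the paper), a Python descriptor can sketch how a field's storage stays unallocated until it is actually touched:

```python
class LazySlot:
    """Descriptor sketch of a lazy-initialization slot: the backing
    storage is not allocated until the field is first read, at which
    point a fresh default value is created transparently."""

    def __init__(self, default_factory):
        self.default_factory = default_factory

    def __set_name__(self, owner, name):
        # Store the real value under a private attribute name.
        self.name = "_" + name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # Allocate the default (e.g. an empty collection) on first read.
        if not hasattr(obj, self.name):
            setattr(obj, self.name, self.default_factory())
        return getattr(obj, self.name)

    def __set__(self, obj, value):
        setattr(obj, self.name, value)


class Entity:
    # Each Entity appears to have a 'children' list, but no list exists
    # until some client actually reads the field.
    children = LazySlot(list)


e = Entity()
assert not hasattr(e, "_children")  # nothing allocated yet
e.children.append(1)                # first read allocates the empty list
assert e.children == [1]
```

Unlike the paper's Pharo slots, this sketch cannot reclaim a field that later becomes empty again; the NilAwareSlot/write-barrier machinery described in the abstract has no direct Python equivalent.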
Language:
English
Peer reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source:
Files
- https://hal.archives-ouvertes.fr/hal-02565748/document
- Open access
- Access the document
- main.pdf
- Open access
- Access the document