Transparent Memory Optimization using Slots
Document type :
Conference paper with proceedings
Title :
Transparent Memory Optimization using Slots
Author(s) :
Tesone, Pablo [Author]
Centre for Digital Systems [CERI SN - IMT Nord Europe]
Analyses and Languages Constructs for Object-Oriented Application Evolution [RMOD]
Bragagnolo, Santiago [Author]
Inria Lille - Nord Europe
Ducasse, Stephane [Author]
Analyses and Languages Constructs for Object-Oriented Application Evolution [RMOD]
Denker, Marcus [Author]
Analyses and Languages Constructs for Object-Oriented Application Evolution [RMOD]
Conference title :
International Workshop on Smalltalk Technologies 2018
City :
Cagliari
Country :
Italy
Start date of the conference :
2018-09-10
English keyword(s) :
Memory consumption
optimization
first class instance variables
Object-oriented programming languages
HAL domain(s) :
Computer Science [cs]/Programming Languages [cs.PL]
English abstract : [en]
Memory size limits the number of instances available in memory at a single time. This limitation affects applications that require large graphs of objects. Moose, a tool used in software analysis, is one such application. It parses and models software code as an object graph, then performs multiple operations on such models (i.e., measuring, querying, mining, and analysing the code). However, not all the information in a model is present, as the model is intended to be used with different applications and programming languages (not all applications or programming languages use the same elements). Analysis of different models shows that between 30% and 50% of memory is wasted. Analysis of models produced in an industrial context reveals that models composed of several million instances used up to 2 GB of memory. In this work, we propose new slots and their combination to transparently optimize memory consumption: NilAwareSlot automatically optimizes nils, and LazyInitializationSlot handles the case where an empty collection is required and used by many clients. We show that by performing a limited number of changes, we improved the memory footprint of Moose models by around 30%. We also show that our solution has performance comparable to an ad hoc solution, but without the need for boilerplate code. To implement this solution, we leverage the existing Pharo support for slots, write barriers, and efficient forwarders.
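The slots described in the abstract are Pharo Smalltalk constructs, but the lazy-initialization idea behind LazyInitializationSlot can be sketched in another language. Below is a minimal Python analogue (all names are illustrative, not the paper's API): a descriptor that stores nothing per instance until the field is first read, at which point it materializes an empty collection, so the many instances that never touch the field pay no per-instance storage cost.

```python
class LazySlot:
    """Descriptor sketch of a lazy-initialization slot.

    The backing attribute is only created on first read, so instances
    that never access the field store nothing for it.
    """

    def __init__(self, factory):
        self.factory = factory  # e.g. list, dict, set

    def __set_name__(self, owner, name):
        self.storage = "_" + name  # hidden per-instance storage key

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        # Materialize the default value lazily, on first read.
        if not hasattr(instance, self.storage):
            setattr(instance, self.storage, self.factory())
        return getattr(instance, self.storage)

    def __set__(self, instance, value):
        setattr(instance, self.storage, value)


class ModelEntity:
    # Each entity gets its own list, but only once someone reads it.
    children = LazySlot(list)


e = ModelEntity()
assert "_children" not in e.__dict__   # nothing stored yet
e.children.append("node")              # first read creates the list
assert e.children == ["node"]
```

In Pharo the same effect is achieved at the slot (first-class instance variable) level, transparently to clients and without this boilerplate in every class; the descriptor above only mimics the access-interception part of that mechanism.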
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files
- https://hal.archives-ouvertes.fr/hal-02565748/document
- Open access
- Access the document
- main.pdf
- Open access
- Access the document