Document type:
Conference paper with proceedings
Title:
A Performance Study of LLM-Generated Code on Leetcode
Author(s):
Coignion, Tristan [Author]
Self-adaptation for distributed services and large software systems [SPIRALS]
Quinton, Clément [Author]
Self-adaptation for distributed services and large software systems [SPIRALS]
Rouvoy, Romain [Author]
Self-adaptation for distributed services and large software systems [SPIRALS]
Conference title:
EASE'24 - 28th International Conference on Evaluation and Assessment in Software Engineering
City:
Salerno
Country:
Italy
Conference start date:
2024-06-18
Keyword(s) in English:
LLM
Large Language Model
Leetcode
Performance
Python
HAL discipline(s):
Computer Science [cs]/Software Engineering [cs.SE]
Computer Science [cs]
Abstract (English):
This study evaluates the efficiency of code generation by Large Language Models (LLMs) and measures their performance against human-crafted solutions using a dataset from Leetcode. We compare 18 LLMs, considering factors such as model temperature and success rate, and their impact on code performance. This research introduces a novel method for measuring and comparing the speed of LLM-generated code, revealing that LLMs produce code with comparable performance, irrespective of the adopted LLM. We also find that LLMs are capable of generating code that is, on average, more efficient than the code written by humans. The paper further discusses the use of Leetcode as a benchmarking dataset, the limitations imposed by potential data contamination, and the platform's measurement reliability. We believe that our findings contribute to a better understanding of LLM capabilities in code generation and set the stage for future optimizations in the field.
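
This record does not spell out the paper's measurement method, so the following is only a minimal, hypothetical Python sketch of the general idea: timing a human-style solution and an LLM-style solution to the same Leetcode-type problem. The toy problem, function names, and input sizes are illustrative assumptions, not the benchmark used in the study.

    # Hypothetical sketch: compare the runtime of two solutions to a
    # Leetcode-style "two sum" problem. This is NOT the paper's method.
    import timeit

    def human_two_sum(nums, target):
        # Quadratic brute-force version.
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return [i, j]
        return []

    def llm_two_sum(nums, target):
        # Linear-time hash-map version.
        seen = {}
        for i, n in enumerate(nums):
            if target - n in seen:
                return [seen[target - n], i]
            seen[n] = i
        return []

    if __name__ == "__main__":
        nums = list(range(2000))
        target = nums[-1] + nums[-2]  # worst case for the quadratic version
        for name, fn in [("human-style", human_two_sum), ("llm-style", llm_two_sum)]:
            elapsed = timeit.timeit(lambda: fn(nums, target), number=10)
            print(f"{name}: {elapsed:.4f} s over 10 runs")

Any real comparison would require many problems, repeated runs, and controlled hardware; the abstract notes that the paper discusses Leetcode's measurement reliability for exactly this reason.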
Language:
English
Peer reviewed:
Yes
Audience:
International
Popularization:
No
ANR project:
Collections:
Source:
Files:
- Tristan___LLM_Code_Performance_Study.pdf (open access)