BIBLIOS

  Ciências References Management System


Publication details

Document type
Conference papers

Document subtype
Full paper

Title
Are Large Language Models Memorizing Bug Benchmarks?

Participants in the publication
Daniel Ramos (Author)
Claudia Mamede (Author)
Kush Jain (Author)
Paulo Canelas (Author)
Dep. Informática
Unidade de I&D e Inovação
LASIGE
Catarina Gamboa (Author)
Dep. Informática
LASIGE
Claire Le Goues (Author)

Summary
Large Language Models (LLMs) have become integral to various software engineering tasks, including code generation, bug detection, and repair. To evaluate model performance in these domains, numerous bug benchmarks containing real-world bugs from software projects have been developed. However, a growing concern within the software engineering community is that these benchmarks may not reliably reflect true LLM performance due to the risk of data leakage. Despite this concern, limited research has been conducted to quantify the impact of potential leakage. In this paper, we systematically evaluate popular LLMs to assess their susceptibility to data leakage from widely used bug benchmarks. To identify potential leakage, we use multiple metrics, including a study of benchmark membership within commonly used training datasets, as well as analyses of negative log-likelihood and n-gram accuracy. Our findings show that certain models, in particular codegen-multi, exhibit significant evidence of memorization in widely used benchmarks like Defects4J, while newer models trained on larger datasets like LLaMa 3.1 exhibit limited signs of leakage. These results highlight the need for careful benchmark selection and the adoption of robust metrics to adequately assess models' capabilities.
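The summary names two of the leakage signals used in the paper: negative log-likelihood and n-gram accuracy on benchmark code. The sketch below is a minimal illustration of how such signals can be computed with the Hugging Face transformers API; it is not the authors' implementation, and the model name, n-gram size, and code snippet are illustrative assumptions only.

```python
# Minimal sketch (assumed, not the paper's code) of two memorization signals:
# mean per-token negative log-likelihood and greedy n-gram accuracy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Salesforce/codegen-350M-multi"  # assumed stand-in for "codegen-multi"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def negative_log_likelihood(code: str) -> float:
    """Mean per-token NLL of `code`; unusually low values suggest memorization."""
    ids = tokenizer(code, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()  # cross-entropy averaged over predicted tokens

def ngram_accuracy(code: str, n: int = 4) -> float:
    """Fraction of positions where the greedy n-token continuation exactly
    matches the ground-truth tokens of `code` (naive per-position loop)."""
    ids = tokenizer(code, return_tensors="pt").input_ids[0]
    hits, total = 0, 0
    for i in range(1, len(ids) - n):
        prefix = ids[:i].unsqueeze(0)
        with torch.no_grad():
            pred = model.generate(prefix, max_new_tokens=n, do_sample=False)
        if torch.equal(pred[0, i:i + n], ids[i:i + n]):
            hits += 1
        total += 1
    return hits / max(total, 1)

# Hypothetical benchmark snippet, for illustration only.
snippet = "public int add(int a, int b) { return a + b; }"
print(negative_log_likelihood(snippet), ngram_accuracy(snippet))
```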

Date of Submission/Request
2024-11-18
Date of Acceptance
2024-12-16
Date of Publication
2025-05-03

Event
The Second International Workshop on Large Language Models for Code

Publication Identifiers

Number of pages
8


Export

APA
Ramos, D., Mamede, C., Jain, K., Canelas, P., Gamboa, C., & Le Goues, C. (2025). Are Large Language Models Memorizing Bug Benchmarks? The Second International Workshop on Large Language Models for Code.

IEEE
Daniel Ramos, Claudia Mamede, Kush Jain, Paulo Canelas, Catarina Gamboa, Claire Le Goues, "Are Large Language Models Memorizing Bug Benchmarks?" in The Second International Workshop on Large Language Models for Code, , 2025, pp. -, doi:

BIBTEX
@InProceedings{62803,
  author    = {Daniel Ramos and Claudia Mamede and Kush Jain and Paulo Canelas and Catarina Gamboa and Claire Le Goues},
  title     = {Are Large Language Models Memorizing Bug Benchmarks?},
  booktitle = {The Second International Workshop on Large Language Models for Code},
  year      = {2025}
}