Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
4-2015
Abstract
Context: Clone benchmarks are essential to the assessment and improvement of clone detection tools and algorithms. Among existing benchmarks, Bellon's benchmark is widely used by the research community. However, a serious threat to the validity of this benchmark is that the reference clones it contains were manually validated by Bellon alone. Others may disagree with Bellon's judgment. Objective: In this paper, we perform an empirical assessment of Bellon's benchmark. Method: We seek the opinion of eighteen participants on a subset of Bellon's benchmark to determine whether researchers should trust the reference clones it contains. Results: Our experiment shows that a significant proportion of the reference clones are debatable, and this phenomenon can introduce noise into results obtained using this benchmark.
Keywords
Code clone, Empirical study, Software metrics
Discipline
Software Engineering
Research Areas
Software and Cyber-Physical Systems
Publication
EASE '15: Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering, April 29
First Page
1
Last Page
10
ISBN
9781450333504
Identifier
10.1145/2745802.2745821
Publisher
ACM
City or Country
New York
Citation
CHARPENTIER, Alan; FALLERI, Jean-Rémy; LO, David; and REVEILLERE, Laurent.
An Empirical Assessment of Bellon's Clone Benchmark. (2015). EASE '15: Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering, April 29. 1-10.
Available at: https://ink.library.smu.edu.sg/sis_research/3092
Copyright Owner and License
Publisher
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/2745802.2745821