Publication Type
Journal Article
Version
acceptedVersion
Publication Date
1-2022
Abstract
Context: Deep learning (DL) techniques have gained significant popularity among software engineering (SE) researchers in recent years, because they can often solve many SE challenges without enormous manual feature engineering effort or complex domain knowledge.
Objective: Although many DL studies have reported substantial advantages in effectiveness over other state-of-the-art models, they often ignore two factors: (1) reproducibility—whether the reported experimental results can be obtained by other researchers using the authors’ artifacts (i.e., source code and datasets) with the same experimental setup; and (2) replicability—whether the reported experimental results can be obtained by other researchers using their own re-implemented artifacts with a different experimental setup. We observed that DL studies commonly overlook these two factors and declare them as minor threats or leave them for future work. This is mainly due to high model complexity, with many manually set parameters and a time-consuming optimization process, unlike classical supervised machine learning (ML) methods (e.g., random forest). This study aims to investigate the urgency and importance of reproducibility and replicability for DL studies on SE tasks.
Method: We conducted a literature review of 147 DL studies recently published in 20 SE venues and 20 AI (artificial intelligence) venues to investigate these issues. We also re-ran four representative DL models in SE to investigate important factors that may strongly affect the reproducibility and replicability of a study.
Results: Our statistics show the urgency of investigating these two factors in SE: only 10.2% of the studies investigate any research question showing that their models can address at least one issue of replicability and/or reproducibility. More than 62.6% of the studies do not even share high-quality source code or complete data to support the reproducibility of their complex models. Meanwhile, our experimental results show the importance of reproducibility and replicability: the reported performance of a DL model could not be reproduced because of an unstable optimization process, and replicability could be substantially compromised if model training does not converge or if performance is sensitive to the size of the vocabulary and the testing data.
Conclusion: It is urgent for the SE community to provide long-lasting links to high-quality reproduction packages, enhance the stability and convergence of DL-based solutions, and avoid performance sensitivity to differently sampled data.
Keywords
Deep Learning, Replicability, Reproducibility, Software Engineering
Discipline
Databases and Information Systems | Software Engineering
Research Areas
Data Science and Engineering; Cybersecurity; Intelligent Systems and Optimization; Software and Cyber-Physical Systems
Publication
ACM Transactions on Software Engineering and Methodology
Volume
31
Issue
1
First Page
1
Last Page
46
ISSN
1049-331X
Identifier
10.1145/3477535
Publisher
Association for Computing Machinery (ACM)
Citation
LIU, Chao; GAO, Cuiyun; XIA, Xin; LO, David; GRUNDY, John C.; and YANG, Xiaohu.
On the reproducibility and replicability of deep learning in software engineering. (2022). ACM Transactions on Software Engineering and Methodology. 31, (1), 1-46.
Available at: https://ink.library.smu.edu.sg/sis_research/7629
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/3477535