Leveraging Large Language Model for automatic patch correctness assessment
Publication Type
Journal Article
Publication Date
11-2024
Abstract
Automated Program Repair (APR) techniques have shown increasingly promising results in fixing real-world bugs. Despite their effectiveness, APR techniques still face an overfitting problem: a generated patch can be incorrect even though it passes all tests, and manually evaluating the correctness of generated patches that pass all available test cases is time-consuming. To address this problem, many approaches have been proposed to automatically assess the correctness of patches generated by APR techniques. These approaches are mainly evaluated in a cross-validation setting. However, for patches generated by a new or unseen APR tool, the cross-validation setting implicitly requires users to manually label a significant portion of those patches (e.g., 90% in 10-fold cross-validation) before inferring the labels of the remaining patches (e.g., 10% in 10-fold cross-validation). To mitigate this issue, in this study, we propose LLM4PatchCorrect, an approach to patch correctness assessment that adopts a large language model for code. Specifically, for patches generated by a new or unseen APR tool, LLM4PatchCorrect needs no labeled patches from that tool for training; it directly queries the large language model for code to predict correctness labels without training. In this way, LLM4PatchCorrect reduces the manual labeling effort required to build a model that automatically assesses the correctness of patches generated by new APR tools. To provide the large language model for code with knowledge of the automatic patch correctness assessment (APCA) task, LLM4PatchCorrect leverages bug descriptions, execution traces, failing test cases, test coverage, and labeled patches generated by existing APR tools before deciding the correctness of the unlabeled patches of a new or unseen APR tool.
Additionally, LLM4PatchCorrect prioritizes labeled patches from existing APR tools that are semantically similar to those generated by the new APR tool, improving its accuracy on patches from new APR tools. Our experimental results showed that LLM4PatchCorrect achieves an average accuracy of 84.4% and an average F1-score of 86.5%, even though no labeled patches of the new or unseen APR tool are available. In addition, our proposed technique significantly outperformed the prior state-of-the-art.
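The core idea described in the abstract, selecting labeled patches from existing APR tools that are semantically similar to the query patch and using them as in-context demonstrations, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `embed` function below is a toy bag-of-words stand-in for a real code embedding model, and the prompt format is assumed for illustration only.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words embedding (stand-in for a code embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_demonstrations(query_patch, labeled_patches, k=2):
    """Rank labeled patches from existing APR tools by semantic similarity
    to the unlabeled query patch and keep the top-k as demonstrations."""
    q = embed(query_patch)
    ranked = sorted(labeled_patches,
                    key=lambda p: cosine(q, embed(p["patch"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query_patch, demos):
    """Assemble an in-context-learning prompt: labeled demonstrations
    first, then the query patch whose correctness the LLM must predict."""
    parts = [f"Patch:\n{d['patch']}\nCorrect: {d['label']}\n" for d in demos]
    parts.append(f"Patch:\n{query_patch}\nCorrect:")
    return "\n".join(parts)

# Hypothetical labeled patches from existing APR tools.
labeled = [
    {"patch": "if (x == null) return;", "label": "yes"},
    {"patch": "for (int i = 0; i < n; i++) sum += i;", "label": "no"},
]
query = "if (y == null) return;"  # unlabeled patch from a new APR tool
demos = select_demonstrations(query, labeled, k=1)
prompt = build_prompt(query, demos)  # would be sent to the LLM for code
```

In the paper itself the demonstrations are enriched with further guidance (bug descriptions, execution traces, failing test cases, and test coverage); the sketch above shows only the similarity-based selection step.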
Keywords
Automatic patch correctness assessment, Large Language Models of code, In-context learning, Automated program repair
Discipline
Artificial Intelligence and Robotics | Computer Sciences
Research Areas
Intelligent Systems and Optimization; Data Science and Engineering
Publication
IEEE Transactions on Software Engineering
Volume
50
Issue
11
First Page
2865
Last Page
2883
ISSN
0098-5589
Identifier
10.1109/TSE.2024.3452252
Publisher
Institute of Electrical and Electronics Engineers
Citation
ZHOU, Xin; XU, Bowen; KIM, Kisub; HAN, DongGyun; NGUYEN, Hung Huu; LE-CONG, Thanh; HE, Junda; LE, Bach; and David LO.
Leveraging Large Language Model for automatic patch correctness assessment. (2024). IEEE Transactions on Software Engineering. 50, (11), 2865-2883.
Available at: https://ink.library.smu.edu.sg/sis_research/9917
Additional URL
https://doi.org/10.1109/TSE.2024.3452252