Publication Type
Journal Article
Version
publishedVersion
Publication Date
7-2025
Abstract
Developers deal with code-change-related tasks daily, e.g., reviewing code. Pre-trained code models and code-change-oriented models have been adapted to help developers with such tasks. Recently, large language models (LLMs) have shown their effectiveness in code-related tasks. However, existing LLMs for code focus on general code syntax and semantics rather than the differences between two code versions. Thus, it is an open question how LLMs perform on code-change-related tasks. To answer this question, we conduct an empirical study using LLMs with more than 1B parameters on three code-change-related tasks, i.e., code review generation, commit message generation, and just-in-time comment update, with in-context learning (ICL) and parameter-efficient fine-tuning (PEFT, including LoRA and prefix-tuning). We observe that the performance of LLMs is poor without examples and generally improves with examples, but more examples do not always lead to better performance. LLMs tuned with LoRA achieve performance comparable to that of state-of-the-art small pre-trained models. Larger models are not always better, but the Llama 2 and Code Llama families are consistently the best. The best LLMs outperform small pre-trained models on code changes that only modify comments and perform comparably on other code changes. We suggest that future work focus more on guiding LLMs to learn knowledge specific to changes related to code rather than comments for code-change-related tasks.
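The LoRA technique mentioned in the abstract adapts a frozen weight matrix W by adding a low-rank update scaled by alpha/r, so only the two small matrices A (r x in) and B (out x r) are trained. A minimal pure-Python sketch of this idea (illustrative only; the study applies LoRA to full-scale LLMs, and all names here are hypothetical):

```python
def lora_apply(W, A, B, x, alpha=16, r=2):
    """Compute (W + (alpha/r) * B @ A) @ x for a single input vector x.

    W is the frozen pre-trained weight matrix; A and B are the small
    trainable LoRA matrices. Matrices are lists of row lists.
    """
    # Frozen base path: W @ x
    base = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    # Low-rank path: project x down to r dimensions with A, then back up with B
    ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    bax = [sum(b * ai for b, ai in zip(row, ax)) for row in B]
    scale = alpha / r
    # Merged output: base prediction plus the scaled low-rank correction
    return [b0 + scale * d for b0, d in zip(base, bax)]
```

At initialization B is all zeros, so the adapted layer reproduces the frozen model exactly; training then moves only A and B, which is what makes the method parameter-efficient.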
Keywords
Code-change-related task, large language model, empirical study
Discipline
Software Engineering
Research Areas
Software and Cyber-Physical Systems
Areas of Excellence
Digital transformation
Publication
ACM Transactions on Software Engineering and Methodology
Volume
34
Issue
6
First Page
1
Last Page
36
ISSN
1049-331X
Identifier
10.1145/3709358
Publisher
Association for Computing Machinery (ACM)
Citation
FAN, Lishui; LIU, Jiakun; LIU, Zhongxin; LO, David; XIA, Xin; and LI, Shanping.
Exploring the capabilities of LLMs for code-change-related tasks. (2025). ACM Transactions on Software Engineering and Methodology. 34, (6), 1-36.
Available at: https://ink.library.smu.edu.sg/sis_research/10939
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/3709358