Publication Type
Journal Article
Version
publishedVersion
Publication Date
9-2025
Abstract
Large language models (LLMs) demonstrate impressive capabilities to generate accurate code snippets given natural language intents in a zero-shot manner, i.e., without the need for specific fine-tuning. While prior studies have highlighted the advantages of fine-tuning LLMs, this process incurs high computational costs, making it impractical in resource-scarce environments, particularly for models with billions of parameters. To address these challenges, previous research has explored in-context learning (ICL) and retrieval-augmented generation (RAG) as strategies to guide the LLM generative process with task-specific prompt examples. However, ICL and RAG introduce inconveniences, such as the need to design contextually relevant prompts and the absence of task-specific parameter learning, thereby limiting downstream task performance. In this context, we foresee parameter-efficient fine-tuning (PEFT) as a promising approach to efficiently specialize LLMs to task-specific data while maintaining reasonable resource consumption. In this article, we deliver a comprehensive study of PEFT techniques for LLMs in the context of automated code generation. Our investigation reveals the superiority and potential of PEFT over ICL and RAG across a diverse set of LLMs and three representative Python code generation datasets: Conala, CodeAlpacaPy, and APPS. Furthermore, our study highlights the potential for tuning larger LLMs and achieving significant reductions in memory usage by combining PEFT with quantization. Therefore, this study opens opportunities for broader applications of PEFT in software engineering scenarios.
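As an illustrative aside, the sketch below shows one way PEFT can be combined with quantization in the spirit described in the abstract, using the Hugging Face transformers, peft, and bitsandbytes libraries. The model checkpoint and LoRA hyperparameters are assumptions chosen for illustration, not the configuration evaluated in the article.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Illustrative checkpoint; not necessarily one of the models studied in the article.
model_name = "bigcode/starcoderbase-1b"

# Load the base model with 4-bit NF4 quantization to reduce memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prepare the quantized model for training and attach LoRA adapters (PEFT).
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapter weights are trainable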
Keywords
code generation, large language models, parameter-efficient fine-tuning, quantization, retrieval-augmented generation, empirical study
Discipline
Programming Languages and Compilers | Software Engineering
Research Areas
Software and Cyber-Physical Systems
Areas of Excellence
Digital transformation
Publication
ACM Transactions on Software Engineering and Methodology
Volume
34
Issue
7
First Page
1
Last Page
25
ISSN
1049-331X
Identifier
10.1145/3714461
Publisher
Association for Computing Machinery (ACM)
Citation
WEYSSOW, Martin; ZHOU, Xin; KIM, Kisub; LO, David; and SAHRAOUI, Houari A.
Exploring parameter-efficient fine-tuning techniques for code generation with large language models. (2025). ACM Transactions on Software Engineering and Methodology. 34, (7), 1-25.
Available at: https://ink.library.smu.edu.sg/sis_research/10952
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/3714461