Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
9-2024
Abstract
The rise of code pre-trained models has significantly enhanced various coding tasks, such as code completion, and tools like GitHub Copilot. However, the substantial size of these models, especially large ones, poses a significant challenge when fine-tuning them for specific downstream tasks. As an alternative, retrieval-based methods have emerged as a promising solution, augmenting model predictions without the need for fine-tuning. Despite their potential, these methods often rely on heuristic designs, leaving open critical questions about what information should be stored or retrieved and how that information should be interpolated to augment predictions. To tackle this challenge, we first perform a theoretical analysis of the fine-tuning process, highlighting the importance of delta logits as a catalyst for improving model predictions. Building on this insight, we develop a novel retrieval-based method, FT2Ra, which aims to mimic genuine fine-tuning. While FT2Ra adopts a retrieval-based mechanism, it uniquely employs a paradigm with a learning rate and multi-epoch retrievals, similar to fine-tuning. We conducted a comprehensive evaluation of FT2Ra on both token-level and line-level code completion. Our findings demonstrate the remarkable effectiveness of FT2Ra compared to state-of-the-art methods and its potential to rival genuine fine-tuning. In token-level completion, the relatively easier task, FT2Ra achieves a 4.29% improvement in accuracy over the best baseline method on UniXcoder. In the more challenging line-level completion task, we observe a more than twofold increase in Exact Match (EM) performance, indicating the significant advantages of our theoretical analysis. Notably, even without actual fine-tuning, FT2Ra exhibits competitive performance compared to models with real fine-tuning.
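The fine-tuning-inspired update described above can be pictured as repeatedly adding retrieved "delta logits" to a frozen model's predictions, scaled by a learning rate. Below is a minimal illustrative sketch of that idea, not the authors' implementation: the datastore layout (`keys`, `delta_logits`), the retrieval function, and the parameter names (`lr`, `epochs`, `k`) are all assumptions made for exposition.

```python
import numpy as np

def retrieve_topk(query_vec, keys, k=8):
    """Return indices of the k datastore entries most similar to the
    query context embedding (cosine similarity)."""
    keys_n = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    q_n = query_vec / np.linalg.norm(query_vec)
    sims = keys_n @ q_n
    return np.argsort(sims)[-k:]

def ft2ra_style_completion(logits, query_vec, keys, delta_logits,
                           lr=0.3, epochs=3, k=8):
    """Refine a frozen LM's next-token logits with retrieved delta logits.

    logits       : (V,) raw next-token logits from the frozen model
    query_vec    : (d,) hidden representation of the current context
    keys         : (N, d) context embeddings stored in the datastore
    delta_logits : (N, V) per-entry logit corrections captured offline
    """
    refined = logits.copy()
    for _ in range(epochs):  # multi-epoch retrieval, as in the abstract
        idx = retrieve_topk(query_vec, keys, k)
        correction = delta_logits[idx].mean(axis=0)
        refined = refined + lr * correction  # learning-rate-style step
        # Note: this toy keeps the query fixed across rounds; the actual
        # method can refine what is retrieved as predictions change.
    return refined

# Toy usage with random data standing in for a real LM and datastore.
rng = np.random.default_rng(0)
V, d, N = 100, 32, 1000
logits = rng.normal(size=V)
query = rng.normal(size=d)
keys = rng.normal(size=(N, d))
deltas = rng.normal(size=(N, V))
next_token = int(np.argmax(ft2ra_style_completion(logits, query, keys, deltas)))
```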
Keywords
Code completion, Critical questions, Downstream, Fine-tuning, Language model, Large models, Model prediction, Retrieval-augmented language model
Discipline
Databases and Information Systems | Theory and Algorithms
Research Areas
Data Science and Engineering; Information Systems and Management
Publication
Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, Vienna, Austria, 2024 September 16-20
First Page
313
Last Page
324
ISBN
9798400706127
Identifier
10.1145/3650212.3652130
Publisher
ACM
City or Country
New York
Citation
GUO, Qi; LIU, Shangqing; XIE, Xiaofei; and TANG, Ze.
FT2Ra: A fine-tuning-inspired approach to retrieval-augmented code completion. (2024). Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, Vienna, Austria, 2024 September 16-20. 313-324.
Available at: https://ink.library.smu.edu.sg/sis_research/9444
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/3650212.3652130