Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
4-2024
Abstract
Large language models of code have shown remarkable effectiveness across various software engineering tasks. Despite the availability of many cloud services built upon these powerful models, there remain several scenarios where developers cannot take full advantage of them, stemming from factors such as restricted or unreliable internet access, institutional privacy policies that prohibit the external transmission of code to third-party vendors, and more. Therefore, developing a compact, efficient, and energy-saving model for deployment on developers' devices becomes essential. To this end, we propose Avatar, a novel approach that crafts a deployable model from a large language model of code by optimizing it in terms of model size, inference latency, energy consumption, and carbon footprint while maintaining a comparable level of effectiveness (e.g., prediction accuracy on downstream tasks). The key idea of Avatar is to formulate the optimization of language models as a multi-objective configuration tuning problem and solve it with the help of a Satisfiability Modulo Theories (SMT) solver and a tailored optimization algorithm. The SMT solver is used to form an appropriate configuration space, while the optimization algorithm identifies the Pareto-optimal set of configurations for training the optimized models using knowledge distillation. We evaluate Avatar with two popular language models of code, i.e., CodeBERT and GraphCodeBERT, on two popular tasks, i.e., vulnerability prediction and clone detection. We use Avatar to produce optimized models with a small size (3 MB), which is 160× smaller than the original large models. On the two tasks, the optimized models significantly reduce energy consumption (up to 184× less), carbon footprint (up to 157× less), and inference latency (up to 76× faster), with only a negligible loss in effectiveness (1.67%).
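To make the abstract's tuning idea concrete, the sketch below illustrates multi-objective configuration tuning in a minimal form: enumerate a small configuration space, prune configurations that violate a hard size constraint (the role the SMT solver plays in Avatar, emulated here with a plain check rather than an actual solver), and keep only the Pareto-optimal configurations. This is not the paper's implementation; the configuration parameters, the 3 MB budget as a hard constraint, and the size/latency/error models are illustrative assumptions.

```python
# Toy illustration of multi-objective configuration tuning in the spirit of
# Avatar's pipeline: constraint-based pruning of the configuration space,
# then Pareto-front selection. All numbers and cost models are made up.
from itertools import product
from typing import Dict, List, Tuple

# Hypothetical configuration space: (vocab size, hidden dim, number of layers).
VOCAB_SIZES = [1000, 5000, 10000]
HIDDEN_DIMS = [16, 32, 64]
NUM_LAYERS = [1, 2, 3]

SIZE_BUDGET_MB = 3.0  # hard constraint, echoing the 3 MB target in the paper


def estimate_objectives(vocab: int, hidden: int, layers: int) -> Tuple[float, float, float]:
    """Return (size_mb, latency_ms, error) for a configuration.

    Crude proxies only: parameter count scaled to MB, latency proportional to
    compute, error shrinking with capacity. Not measurements.
    """
    params = vocab * hidden + layers * (4 * hidden * hidden)
    size_mb = params * 4 / 1e6             # float32 bytes -> MB
    latency_ms = layers * hidden * 0.01    # toy latency model
    error = 100.0 / (1 + 0.0001 * params)  # toy error model
    return size_mb, latency_ms, error


def satisfies_constraints(objectives: Tuple[float, float, float]) -> bool:
    """Constraint check standing in for the SMT-based configuration space construction."""
    size_mb, _, _ = objectives
    return size_mb <= SIZE_BUDGET_MB


def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b if it is no worse in every objective and strictly better in one (all minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def pareto_front(configs: List[Dict]) -> List[Dict]:
    """Keep configurations not dominated by any other candidate."""
    return [
        c for c in configs
        if not any(dominates(o["objectives"], c["objectives"]) for o in configs if o is not c)
    ]


candidates = []
for vocab, hidden, layers in product(VOCAB_SIZES, HIDDEN_DIMS, NUM_LAYERS):
    objectives = estimate_objectives(vocab, hidden, layers)
    if satisfies_constraints(objectives):
        candidates.append({"config": (vocab, hidden, layers), "objectives": objectives})

for c in pareto_front(candidates):
    print(c["config"], tuple(round(v, 2) for v in c["objectives"]))
```

In the paper's actual pipeline, each Pareto-optimal configuration is then used to train a compact model via knowledge distillation from the original large model of code; that training step is omitted from this sketch.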
Keywords
Language Models of Code, Configuration Tuning, Multi-Objective Optimization
Discipline
Programming Languages and Compilers | Software Engineering
Research Areas
Software and Cyber-Physical Systems
Publication
ICSE-SEIS'24: Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Society, Lisbon, Portugal, April 14-20
First Page
142
Last Page
153
ISBN
9798400704994
Identifier
10.1145/3639475.3640097
Publisher
ACM
City or Country
New York
Citation
SHI, Jieke; YANG, Zhou; KANG, Hong Jin; XU, Bowen; HE, Junda; and LO, David.
Greening large language models of code. (2024). ICSE-SEIS'24: Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Society, Lisbon, Portugal, April 14-20. 142-153.
Available at: https://ink.library.smu.edu.sg/sis_research/9249
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/3639475.3640097