CoSec : On-the-Fly security hardening of code LLMs via supervised co-decoding
Publication Type
Conference Proceeding Article
Publication Date
9-2024
Abstract
Large Language Models (LLMs) specialized in code have shown exceptional proficiency across various programming-related tasks, particularly code generation. Nonetheless, because they are pretrained on massive, uncritically filtered data, prior studies have shown that code LLMs are prone to generating code with potential vulnerabilities. Existing approaches to mitigating this risk involve crafting vulnerability-free data and subsequently retraining or fine-tuning the model. As the number of parameters exceeds a billion, the computation and data demands of such approaches become enormous. Moreover, an increasing number of code LLMs are distributed as services, where the internal representation is not accessible and the API is the only way to reach the LLM, making the prior mitigation strategies inapplicable. To cope with this, we propose CoSec, an on-the-fly Security hardening method for code LLMs based on security model-guided Co-decoding, which reduces the likelihood that code LLMs generate code containing vulnerabilities. Our key idea is to train a separate but much smaller security model to co-decode with a target code LLM. Since the trained security model has higher confidence in secure tokens, it guides the target base model towards more secure code generation. By adjusting the probability distribution over tokens at each step of the decoding process, our approach effectively influences the generation tendencies without accessing the internal parameters of the target code LLM. We have conducted extensive experiments across various parameter scales in multiple code LLMs (i.e., CodeGen, StarCoder, and DeepSeek-Coder), and the results show that our approach is effective in security hardening. Specifically, our approach improves the average security ratio of six base models by 5.02%-37.14%, while maintaining the functional correctness of the target model.
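The co-decoding idea sketched in the abstract can be illustrated as follows: at each decoding step, the base code LLM's next-token distribution is blended with that of a much smaller security model, and the next token is chosen from the combined distribution, so no access to the base model's internal parameters is needed. The sketch below is only an illustration under assumed interfaces, not the authors' implementation: the stand-in logit functions, the blending weight `alpha`, and the multiplicative combination rule are all assumptions for exposition.

```python
# Minimal illustrative sketch of security-model-guided co-decoding
# (assumptions: toy stand-in models, greedy selection, and a simple
# log-linear combination rule weighted by `alpha`).
import numpy as np

VOCAB_SIZE = 8  # toy vocabulary so the example runs without real models


def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()


def base_model_logits(prefix):
    """Stand-in for the large target code LLM: returns next-token logits."""
    rng = np.random.default_rng(len(prefix))
    return rng.normal(size=VOCAB_SIZE)


def security_model_logits(prefix):
    """Stand-in for the much smaller security model."""
    rng = np.random.default_rng(len(prefix) + 1)
    return rng.normal(size=VOCAB_SIZE)


def co_decode_step(prefix, alpha=1.0):
    """Blend the two next-token distributions and pick the next token.

    p_final(t) is proportional to p_base(t) * p_sec(t) ** alpha, so tokens
    that the security model is confident about gain probability mass.
    Only output probabilities of the base model are required.
    """
    p_base = softmax(base_model_logits(prefix))
    p_sec = softmax(security_model_logits(prefix))
    combined = np.log(p_base) + alpha * np.log(p_sec)
    p_final = softmax(combined)
    return int(np.argmax(p_final)), p_final


def co_decode(prompt_tokens, max_new_tokens=5, alpha=1.0):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token, _ = co_decode_step(tokens, alpha)
        tokens.append(next_token)
    return tokens


if __name__ == "__main__":
    print(co_decode([1, 2, 3]))
```

In practice the stand-in functions would be replaced by calls to the deployed base model (e.g., via an API exposing per-token probabilities) and to the locally hosted security model; the blending weight controls how strongly the security model steers generation.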
Keywords
Code generation, Large Language Models, Security hardening, Model training
Discipline
Artificial Intelligence and Robotics | Software Engineering
Publication
Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2024) : Vienna, Austria, September 16-20
First Page
1428
Last Page
1439
Identifier
10.1145/3650212.3680371
Publisher
Association for Computing Machinery
City or Country
New York, USA
Citation
LI, Dong; YAN, Meng; ZHANG, Yaosheng; LIU, Zhongxin; LIU, Chao; ZHANG, Xiaohong; CHEN, Ting; and David LO.
CoSec : On-the-Fly security hardening of code LLMs via supervised co-decoding. (2024). Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2024) : Vienna, Austria, September 16-20. 1428-1439.
Available at: https://ink.library.smu.edu.sg/sis_research/9918
Additional URL
https://doi.org/10.1145/3650212.3680371