Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
5-2023
Abstract
Large-scale pre-trained models such as CodeBERT and GraphCodeBERT have earned widespread attention from both academia and industry. Owing to their superior ability in code representation, they have been further applied to multiple downstream tasks such as clone detection, code search, and code translation. However, it has also been observed that these state-of-the-art pre-trained models are susceptible to adversarial attacks: their performance drops significantly under simple perturbations such as renaming variables. This weakness may be inherited by their downstream models and thereby amplified at an unprecedented scale. To this end, we propose an approach, namely ContraBERT, that aims to improve the robustness of pre-trained models via contrastive learning. Specifically, we design nine simple and complex data augmentation operators on programming language (PL) and natural language (NL) data to construct different variants. Furthermore, we continue to train the existing pre-trained models with masked language modeling (MLM) and a contrastive pre-training task on the original samples and their augmented variants to enhance model robustness. Extensive experiments demonstrate that ContraBERT can effectively improve the robustness of existing pre-trained models. Further study also confirms that these robustness-enhanced models provide improvements over the original models on four popular downstream tasks.
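The abstract describes a contrastive pre-training objective that pulls an original sample toward its augmented variant (e.g., variable-renamed code) while pushing it away from other samples in the batch. Below is a minimal sketch of such an InfoNCE-style loss; it is not the authors' released implementation, and all names (info_nce_loss, temperature, the placeholder embeddings) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(orig_repr: torch.Tensor,
                  aug_repr: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """orig_repr, aug_repr: (batch, dim) encoder outputs for original
    samples and their augmented variants."""
    orig = F.normalize(orig_repr, dim=-1)
    aug = F.normalize(aug_repr, dim=-1)
    # Similarity of every original sample to every augmented sample.
    logits = orig @ aug.t() / temperature          # (batch, batch)
    # The positive pair for sample i is its own augmented variant i;
    # all other variants in the batch act as negatives.
    targets = torch.arange(orig.size(0), device=orig.device)
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    # Random features standing in for encoder (e.g. [CLS]) embeddings.
    h_orig = torch.randn(8, 768)
    h_aug = torch.randn(8, 768)
    print(info_nce_loss(h_orig, h_aug).item())
```

In practice this loss would be combined with the MLM objective when continuing to train an existing pre-trained model, as the abstract outlines.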
Keywords
industries, computer languages, codes, perturbation methods, natural languages, cloning, data augmentation
Discipline
Artificial Intelligence and Robotics
Research Areas
Intelligent Systems and Optimization
Publication
Proceedings of the 45th International Conference on Software Engineering
First Page
2476
Last Page
2487
Identifier
10.1109/ICSE48619.2023.00207
Publisher
IEEE
City or Country
Melbourne, Australia
Citation
LIU, Shangqing; WU, Bozhi; XIE, Xiaofei; MENG, Guozhu; and LIU, Yang.
ContraBERT: Enhancing code pre-trained models via contrastive learning. (2023). Proceedings of the 45th International Conference on Software Engineering, 2476-2487.
Available at: https://ink.library.smu.edu.sg/sis_research/8228
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1109/ICSE48619.2023.00207