Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

12-2023

Abstract

Existing methods for generating adversarial code examples face several challenges: the limited availability of substitute variables, high verification costs for these substitutes, and the creation of adversarial samples with noticeable perturbations. To address these concerns, our proposed approach, RNNS, uses a search seed based on historical attacks to find potential adversarial substitutes. Rather than using the discrete substitutes directly, RNNS maps them to a continuous vector space using a pre-trained variable name encoder. Based on this vector representation, RNNS predicts and selects better substitutes for attacks. We evaluated the performance of RNNS across six coding tasks encompassing three programming languages: Java, Python, and C. We employed three pre-trained code models (CodeBERT, GraphCodeBERT, and CodeT5), resulting in a total of 18 victim models. The results demonstrate that RNNS outperforms the baselines in terms of attack success rate (ASR) and query times (QT). Furthermore, the perturbation introduced by RNNS is smaller than that of the baselines in terms of the number of replaced variables and the change in variable length. Lastly, our experiments indicate that RNNS is efficient in attacking defended models and can be employed for adversarial training.
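
The abstract describes mapping variable names into a continuous vector space and searching near a seed for promising substitutes. The snippet below is a minimal, illustrative sketch of that nearest-neighbour selection step only, not the paper's implementation: the toy encode_name function stands in for the pre-trained variable name encoder, and select_substitutes, the candidate list, and all parameter choices are hypothetical.

```python
# Minimal sketch of nearest-neighbour substitute selection in an embedding space.
# The encoder below is a toy stand-in for a pre-trained variable name encoder;
# all names and parameters here are illustrative, not from the paper.
import numpy as np

def encode_name(name: str, dim: int = 64) -> np.ndarray:
    """Toy encoder: maps a variable name to a unit vector.
    A real system would use a learned model; this only makes the example runnable."""
    rng = np.random.default_rng(abs(hash(name)) % (2**32))  # deterministic within one run
    vec = rng.normal(size=dim)
    return vec / np.linalg.norm(vec)

def select_substitutes(seed_name: str, candidate_names: list[str], top_k: int = 5) -> list[str]:
    """Rank candidate variable names by cosine similarity to a search seed
    (e.g. a name from a previously successful attack) and return the top-k."""
    seed_vec = encode_name(seed_name)
    scored = []
    for cand in candidate_names:
        if cand == seed_name:
            continue
        sim = float(np.dot(seed_vec, encode_name(cand)))  # cosine similarity (unit vectors)
        scored.append((sim, cand))
    scored.sort(reverse=True)
    return [cand for _, cand in scored[:top_k]]

if __name__ == "__main__":
    candidates = ["count", "counter", "idx", "total", "buffer", "tmp_val"]
    print(select_substitutes("cnt", candidates, top_k=3))
```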

Discipline

Databases and Information Systems | Programming Languages and Compilers

Research Areas

Data Science and Engineering

Publication

Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10

First Page

9706

Last Page

9716

Identifier

10.18653/v1/2023.findings-emnlp.649

Publisher

Association for Computational Linguistics

City or Country

Texas

Additional URL

https://doi.org/10.18653/v1/2023.findings-emnlp.649
