Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
12-2023
Abstract
Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content. Previous research constructs attack prompts via manual or automatic methods: manual construction yields high-quality prompts but is costly, while automatic construction is cheap but produces prompts of limited quality. To address these issues, we propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts. Specifically, leveraging the impressive capabilities of newly emerged LLMs, we propose an attack framework that instructs LLMs to mimic human-written attack prompts through in-context learning. Furthermore, we propose a defense framework that fine-tunes victim LLMs through iterative interactions with the attack framework, enhancing their safety against red teaming attacks. Extensive experiments on different LLMs validate the effectiveness of the proposed attack and defense frameworks. Additionally, we release a series of attack prompt datasets of varying sizes, named SAP, to facilitate the safety evaluation and enhancement of more LLMs.
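For illustration, below is a minimal Python sketch of the in-context-learning attack loop the abstract describes: an attacker LLM is shown a few existing attack prompts as in-context examples and asked to mimic them, and an evaluator keeps only high-scoring generations for the growing pool. The helper callables (generate, score) and all hyperparameters are hypothetical placeholders, not the paper's actual implementation; the defense framework would analogously fine-tune the victim LLM on safe responses to the prompts such a loop produces.

```python
# Minimal sketch (not the paper's code) of an in-context-learning attack loop:
# sample a few existing attack prompts as examples, ask an attacker LLM to
# mimic them, and keep generations the evaluator judges to be high quality.
import random

def expand_attack_pool(seed_prompts, generate, score,
                       rounds=3, shots=3, threshold=0.8):
    """Grow a pool of attack prompts via in-context learning.

    generate: caller-supplied callable wrapping the attacker LLM
              (instruction string -> generated prompt string)
    score:    caller-supplied callable wrapping an evaluator LLM that rates
              how likely a candidate prompt is to elicit harmful output
              (returns a value in [0.0, 1.0])
    rounds, shots, threshold: illustrative settings, not the paper's values.
    """
    pool = list(seed_prompts)
    for _ in range(rounds):
        # Show the attacker a few examples drawn from the current pool.
        examples = random.sample(pool, min(shots, len(pool)))
        instruction = (
            "Here are some example red-teaming prompts:\n"
            + "\n".join(f"- {p}" for p in examples)
            + "\nWrite one new prompt in the same style."
        )
        candidate = generate(instruction)
        # High-quality generations re-enter the pool as future examples.
        if score(candidate) >= threshold:
            pool.append(candidate)
    return pool
```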
Discipline
Programming Languages and Compilers
Research Areas
Data Science and Engineering
Areas of Excellence
Digital transformation
Publication
Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10
First Page
2176
Last Page
2189
ISBN
9798891760615
Identifier
10.18653/v1/2023.findings-emnlp.143
Publisher
Association for Computational Linguistics
City or Country
USA
Citation
DENG, Boyi; WANG, Wenjie; FENG, Fuli; DENG, Yang; WANG, Qifan; and HE, Xiangnan.
Attack prompt generation for red teaming and defending large language models. (2023). Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10. 2176-2189.
Available at: https://ink.library.smu.edu.sg/sis_research/9118
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.18653/v1/2023.findings-emnlp.143