Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
7-2025
Abstract
Large Language Models (LLMs) are vulnerable to backdoor attacks that manipulate outputs via hidden triggers. Existing defense methods—designed for vision/text classification tasks—fail for text generation. We propose Internal Consistency Regularization (CROW), a defense leveraging the observation that backdoored models exhibit unstable layer-wise hidden representations when triggered, while clean models show smooth transitions. CROW enforces consistency across layers via adversarial perturbations and regularization during finetuning, neutralizing backdoors without requiring clean reference models or trigger knowledge—only a small clean dataset. Experiments across Llama-2 (7B, 13B), CodeLlama (7B, 13B), and Mistral-7B demonstrate CROW's effectiveness: it achieves significant reductions in attack success rates across diverse backdoor strategies (sentiment steering, targeted refusal, code injection) while preserving generative performance. CROW's architecture-agnostic design enables practical deployment.
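The defense described in the abstract (penalizing unstable layer-to-layer hidden representations, combined with adversarial perturbations during finetuning) can be illustrated with a rough sketch. The snippet below is not the authors' implementation: the specific loss form (one minus cosine similarity between adjacent layers), the sign-gradient perturbation, and the hyperparameters `alpha` and `epsilon` are illustrative assumptions, assuming a Hugging Face-style causal LM that exposes `output_hidden_states`.

```python
import torch
import torch.nn.functional as F

def layer_consistency_loss(hidden_states):
    """Penalize abrupt changes between consecutive layer representations.

    hidden_states: tuple of tensors of shape (batch, seq_len, dim), one per
    layer, e.g. the `hidden_states` output of a Hugging Face causal LM called
    with output_hidden_states=True. (Illustrative loss, not the paper's exact form.)
    """
    loss = 0.0
    for prev, curr in zip(hidden_states[:-1], hidden_states[1:]):
        # Cosine similarity along the hidden dimension; (1 - sim) measures how
        # much the representation "jumps" between adjacent layers.
        sim = F.cosine_similarity(prev, curr, dim=-1)  # (batch, seq_len)
        loss = loss + (1.0 - sim).mean()
    return loss / (len(hidden_states) - 1)

def consistency_finetune_step(model, input_ids, attention_mask, labels,
                              alpha=1.0, epsilon=0.1):
    """One hypothetical finetuning step on clean data: task loss plus a
    consistency penalty evaluated under an adversarial embedding perturbation.
    alpha and epsilon are placeholder hyperparameters, not values from the paper."""
    # Work in embedding space so we can perturb the inputs directly.
    embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)

    out = model(inputs_embeds=embeds, attention_mask=attention_mask,
                output_hidden_states=True)
    cons = layer_consistency_loss(out.hidden_states)

    # Adversarial direction: perturb embeddings to maximize layer inconsistency.
    grad, = torch.autograd.grad(cons, embeds)
    adv_embeds = (embeds + epsilon * grad.sign()).detach()

    out_adv = model(inputs_embeds=adv_embeds, attention_mask=attention_mask,
                    labels=labels, output_hidden_states=True)
    loss = out_adv.loss + alpha * layer_consistency_loss(out_adv.hidden_states)
    loss.backward()  # caller applies optimizer.step() / zero_grad()
    return loss.item()
```

In this sketch the perturbation is derived only from the consistency loss on clean samples, so no knowledge of the trigger is assumed, matching the abstract's claim that CROW needs only a small clean dataset.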
Discipline
Programming Languages and Compilers | Software Engineering
Research Areas
Software and Cyber-Physical Systems
Areas of Excellence
Digital transformation
Publication
Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada, 2025 July 13-19
First Page
1
Last Page
20
Identifier
10.48550/arXiv.2411.12768
City or Country
Canada
Citation
MIN, Nay Myat; PHAM, Long H.; LI, Yige; and SUN, Jun.
CROW: Eliminating backdoors from large language models via internal consistency regularization. (2025). Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada, 2025 July 13-19. 1-20.
Available at: https://ink.library.smu.edu.sg/sis_research/10281
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.48550/arXiv.2411.12768