Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
4-2025
Abstract
Large Language Models (LLMs) generating unsafe responses to toxic prompts is a significant issue in their applications. While various efforts aim to address this safety concern, previous approaches often demand substantial human data collection or rely on the less dependable option of using another LLM to generate corrective data. In this paper, we tackle this problem without requiring large amounts of high-quality human data. Our method needs only a small set of unsafe responses to toxic prompts, which can easily be obtained from the unsafe LLM itself. By employing a semantic cost combined with a negative Earth Mover Distance (EMD) loss, we guide the LLM away from generating unsafe responses. Additionally, we propose a novel lower bound for the EMD loss, enabling more efficient optimization. Our results demonstrate superior performance and data efficiency compared to baselines, and we further examine the nuanced effects of over-alignment and the potential degradation of language capabilities when training on contrastive data.
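This record does not include the authors' implementation. The following is a minimal PyTorch sketch of the general idea the abstract describes: an EMD-style transport cost between the model's token distribution and an unsafe reference distribution, with a semantic (embedding-based) cost matrix, negated so that minimizing the loss pushes the model away from the unsafe distribution. All names are illustrative assumptions, and the entropy-regularised Sinkhorn solver is a generic stand-in for the paper's lower-bound optimization, which the record does not detail.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch

def semantic_cost(emb: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-distance cost between token embeddings (V x V)."""
    e = torch.nn.functional.normalize(emb, dim=-1)
    return 1.0 - e @ e.T

def sinkhorn_emd(p, q, cost, eps=0.1, iters=100):
    """Entropy-regularised optimal-transport cost between distributions p, q."""
    K = torch.exp(-cost / eps)          # Gibbs kernel from the cost matrix
    u = torch.ones_like(p)
    for _ in range(iters):              # Sinkhorn fixed-point updates
        v = q / (K.T @ u)
        u = p / (K @ v)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)
    return (plan * cost).sum()          # approximate EMD under cost

def negative_emd_loss(model_probs, unsafe_probs, cost):
    """Minimizing the negative EMD maximizes semantic distance
    from the unsafe reference distribution."""
    return -sinkhorn_emd(model_probs, unsafe_probs, cost)

# Toy usage with random embeddings and distributions.
V, d = 8, 4
emb = torch.randn(V, d)                    # hypothetical token embeddings
p = torch.softmax(torch.randn(V), dim=0)   # model's next-token distribution
q = torch.softmax(torch.randn(V), dim=0)   # unsafe reference distribution
loss = negative_emd_loss(p, q, semantic_cost(emb))
```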
Keywords
Large Language Model, Safe Large Language Model, Earth Mover Distance, Supervised Fine-tuning
Discipline
Artificial Intelligence and Robotics
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
Proceedings of the Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28
First Page
52123
Last Page
52135
ISBN
9798331320850
Publisher
ICLR
City or Country
Singapore
Citation
LU, Yuxiao and VARAKANTHAM, Pradeep.
Semantic loss-guided data-efficient supervised fine-tuning for safe responses in LLMs. (2025). Proceedings of the Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28. 52123-52135.
Available at: https://ink.library.smu.edu.sg/sis_research/10745
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://openreview.net/forum?id=kO0DgO07hW