Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

7-2025

Abstract

Aligning Large Language Model (LLM) responses with human preferences is vital for building safe and controllable AI systems. While preference optimization methods based on Plackett-Luce (PL) and Bradley-Terry (BT) models have shown promise, they face challenges such as poor handling of harmful content, inefficient use of dispreferred responses, and, specifically for PL, high computational costs. To address these issues, we propose Hard Preference Sampling (HPS), a novel framework for robust and efficient human preference alignment. HPS introduces a training loss that prioritizes the most preferred response while rejecting all dispreferred and harmful ones. It emphasizes “hard” dispreferred responses (those closely resembling preferred ones) to enhance the model’s rejection capabilities. By leveraging a single-sample Monte Carlo sampling strategy, HPS reduces computational overhead while maintaining alignment quality. Theoretically, HPS improves sample efficiency over existing PL methods and maximizes the reward margin between preferred and dispreferred responses, ensuring clearer distinctions. Experiments on HH-RLHF and PKU-Safety datasets validate HPS’s effectiveness, achieving comparable BLEU and reward scores while greatly improving reward margins and thus reducing harmful content generation. The source code is available at https://github.com/LVLab-SMU/HPS.
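To make the idea of a hard-negative-weighted preference loss concrete, the snippet below is a minimal illustrative sketch, not the authors' implementation (which is available at the repository linked above). It contrasts the single preferred response against several dispreferred ones and up-weights "hard" negatives whose reward is close to the preferred one; the function name (hps_style_loss), the temperature beta, and the use of precomputed scalar rewards are assumptions made for illustration only.

    # Illustrative sketch of a hard-negative-weighted preference loss.
    # Names and weighting scheme are assumptions, not the official HPS objective.
    import torch
    import torch.nn.functional as F

    def hps_style_loss(reward_preferred: torch.Tensor,
                       rewards_dispreferred: torch.Tensor,
                       beta: float = 1.0) -> torch.Tensor:
        """reward_preferred: (batch,) rewards of the chosen response.
        rewards_dispreferred: (batch, k) rewards of k rejected/harmful responses.
        """
        # Margin between the preferred response and each dispreferred one.
        margins = reward_preferred.unsqueeze(-1) - rewards_dispreferred  # (batch, k)

        # "Hard" negatives (small margin) get larger weights via a softmax over
        # the negated margins; detached so the weights act as fixed importance.
        hard_weights = torch.softmax(-beta * margins, dim=-1).detach()

        # Logistic loss on each margin pushes the preferred reward above every
        # dispreferred reward, with hard negatives contributing more.
        per_pair_loss = -F.logsigmoid(beta * margins)  # (batch, k)
        return (hard_weights * per_pair_loss).sum(dim=-1).mean()

    # Example usage with dummy rewards: batch of 2 prompts, 3 negatives each.
    chosen = torch.tensor([1.2, 0.4])
    rejected = torch.tensor([[0.9, -0.5, 0.1], [0.5, 0.3, -1.0]])
    print(hps_style_loss(chosen, rejected).item())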

Keywords

Alignment, Preference Optimization, RLHF, Large Language Models

Discipline

Programming Languages and Compilers

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

Proceedings of the 42nd International Conference on Machine Learning, ICML 2025, Vancouver, Canada, July 13-19

First Page

1

Last Page

24

City or Country

Vancouver, Canada

Additional URL

https://openreview.net/forum?id=hLvWwRZkok
