Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
3-2024
Abstract
Constrained Reinforcement Learning employs trajectory-based cost constraints (such as expected cost, Value at Risk, or Conditional VaR cost) to compute safe policies. The challenge lies in handling these constraints effectively while optimizing expected reward. Existing methods convert such trajectory-based constraints into local cost constraints, but they rely on cost estimates, leading to solutions that are either aggressive or conservative with regard to cost. We propose an unconstrained formulation that employs reward penalties over states augmented with costs to compute safe policies. Unlike standard primal-dual methods, our approach penalizes only infeasible trajectories through state augmentation. This ensures that increasing the penalty parameter always yields a feasible policy, a feature lacking in primal-dual methods. Our approach exhibits strong empirical performance and theoretical properties, offering a fresh paradigm for solving complex Constrained RL problems, including rich constraints like expected cost, Value at Risk, and Conditional Value at Risk. Our experimental results demonstrate superior performance compared to leading approaches across various constraint types on multiple benchmark problems.
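The core idea in the abstract can be illustrated with a minimal sketch: augment the state with the accumulated trajectory cost, and subtract a penalty from the reward only when a finished trajectory violates the cost budget. Everything below (the toy chain MDP, the `BUDGET` and `LAMBDA` values, and tabular Q-learning as the solver) is a hypothetical illustration of the reward-penalty-on-augmented-states idea, not the paper's actual algorithm or benchmarks.

```python
import random

# Toy 4-step chain MDP (states 0..4). Each step: "safe" (reward 0.1, cost 0)
# or "risky" (reward 1.0, cost 1). Trajectory constraint: total cost <= BUDGET.
# Instead of a Lagrangian on expected cost, the state is augmented with the
# accumulated cost, and a penalty LAMBDA is subtracted from the final reward
# only on infeasible trajectories (accumulated cost > BUDGET).

BUDGET = 1
LAMBDA = 5.0  # penalty parameter; raising it pushes the policy toward feasibility

def step(s, acc_cost, action):
    reward = 1.0 if action == "risky" else 0.1
    cost = 1 if action == "risky" else 0
    s2, acc2 = s + 1, acc_cost + cost
    done = s2 == 4
    if done and acc2 > BUDGET:  # penalize only infeasible trajectories
        reward -= LAMBDA
    return s2, acc2, reward, done

def q_learning(episodes=5000, alpha=0.2, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}  # keyed by the augmented state (s, accumulated cost)
    for _ in range(episodes):
        s, acc = 0, 0
        done = False
        while not done:
            key = (s, acc)
            Q.setdefault(key, {"safe": 0.0, "risky": 0.0})
            if rng.random() < eps:
                a = rng.choice(["safe", "risky"])
            else:
                a = max(Q[key], key=Q[key].get)
            s2, acc2, r, done = step(s, acc, a)
            nxt = 0.0 if done else max(
                Q.setdefault((s2, acc2), {"safe": 0.0, "risky": 0.0}).values())
            Q[key][a] += alpha * (r + nxt - Q[key][a])
            s, acc = s2, acc2
    return Q
```

Because the accumulated cost is part of the state, the terminal penalty is Markov, and the greedy policy learned on the augmented states takes at most `BUDGET` risky actions: any additional one would trigger the penalty.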
Keywords
Safe reinforcement learning, Reward penalties, Constraint optimization, Reinforcement learning, Markov models (MDPs, POMDPs), Stochastic optimization
Discipline
Artificial Intelligence and Robotics
Areas of Excellence
Digital transformation
Publication
Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence : Vancouver, Canada, February 20-27, 2024
Volume
38
First Page
19867
Last Page
19875
ISSN
2159-5399
Identifier
10.1609/aaai.v38i18.29962
Publisher
Association for the Advancement of Artificial Intelligence
City or Country
Vancouver, Canada
Citation
HAO, Jiang; MAI, Tien; VARAKANTHAM, Pradeep; and HOANG, Minh Huy.
Reward penalties on augmented states for solving richly constrained RL effectively. (2024). Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence : Vancouver, Canada, February 20-27, 2024. 38, 19867-19875.
Available at: https://ink.library.smu.edu.sg/sis_research/9685
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1609/aaai.v38i18.29962