Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
2-2023
Abstract
One approach to guaranteeing safety in reinforcement learning is through cost constraints that are imposed on trajectories. Recent works in constrained RL have developed methods that enforce these constraints even during learning while maximizing the overall value of the policy. Unfortunately, as demonstrated in our experimental results, such approaches do not perform well on complex multi-level tasks with longer episode lengths or sparse rewards. To that end, we propose a scalable hierarchical approach for constrained RL problems that employs backward cost value functions in the context of the task hierarchy and a novel intrinsic reward function at the lower levels of the hierarchy to enable cost constraint enforcement. One of our key contributions is proving that backward value functions remain theoretically viable even when there are multiple levels of decision making. We also show that our new approach, referred to as Hierarchically Limited consTraint Enforcement (HiLiTE), significantly improves on state-of-the-art constrained RL approaches on many benchmark problems from the literature. We further demonstrate that HiLiTE clearly outperforms the best existing approaches for constrained RL and hierarchical RL, both on value and on constraint enforcement.
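For readers unfamiliar with the cost-constrained RL setting the abstract refers to, the following is a minimal, hypothetical Python sketch of the standard Lagrangian relaxation of that objective. It illustrates only the generic setup (maximize expected reward return subject to a bound on expected cost return); it is not the paper's HiLiTE algorithm, and all function and parameter names here are invented for this example.

# Hypothetical illustration of the generic cost-constrained RL objective
# via Lagrangian relaxation: max_pi E[reward return] s.t. E[cost return] <= d.
# This is NOT the paper's HiLiTE method; names are invented for illustration.
import numpy as np

def lagrangian_objective(reward_returns, cost_returns, lam, budget):
    # Scalarised objective: expected reward return minus the
    # lambda-weighted expected constraint violation.
    return np.mean(reward_returns) - lam * (np.mean(cost_returns) - budget)

def update_multiplier(lam, cost_returns, budget, lr=1e-2):
    # Dual ascent on the multiplier: lambda grows while the expected
    # episode cost exceeds the budget d, and decays toward zero otherwise.
    return max(0.0, lam + lr * (np.mean(cost_returns) - budget))

In such schemes the policy is updated to maximize the scalarised objective while the multiplier is adapted between batches; hierarchical methods like the one described in the abstract additionally must account for constraint accounting across levels of the task hierarchy.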
Keywords
reinforcement learning
Discipline
Artificial Intelligence and Robotics
Research Areas
Intelligent Systems and Optimization
Publication
Proceedings of the AAAI Conference on Artificial Intelligence
Volume
37
First Page
15055
Last Page
15063
Identifier
10.1609/aaai.v37i12.26757
Publisher
Association for the Advancement of Artificial Intelligence
City or Country
Washington, DC
Citation
PATHMANATHAN, Pankayaraj and VARAKANTHAM, Pradeep. Constrained reinforcement learning in hard exploration problems. (2023). Proceedings of the AAAI Conference on Artificial Intelligence, 37, 15055-15063.
Available at: https://ink.library.smu.edu.sg/sis_research/8590
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1609/aaai.v37i12.26757