Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

2-2024

Abstract

A popular framework for enforcing safe actions in Reinforcement Learning (RL) is Constrained RL, where trajectory-based constraints on expected cost (or other cost measures) are employed to enforce safety and, more importantly, these constraints are enforced while maximizing expected reward. Most recent approaches for solving Constrained RL convert the trajectory-based cost constraint into a surrogate problem that can be solved using minor modifications to RL methods. A key drawback with such approaches is an over- or under-estimation of the cost constraint at each state. Therefore, we provide an approach that does not modify the trajectory-based cost constraint and instead imitates "good" trajectories and avoids "bad" trajectories generated from incrementally improving policies. We employ an oracle that utilizes a reward threshold (which is varied with learning) and the overall cost constraint to label trajectories as "good" or "bad". A key advantage of our approach is that we are able to work from any starting policy or set of trajectories and improve on it. In an exhaustive set of experiments, we demonstrate that our approach is able to outperform top benchmark approaches for solving Constrained RL problems, with respect to expected cost, CVaR cost, or even unknown cost constraints.
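The following is a minimal illustrative sketch (not the authors' code) of the trajectory-labelling oracle described in the abstract: a trajectory is marked "good" when its return clears the current reward threshold and its accumulated cost respects the overall cost budget, and "bad" otherwise, with the threshold raised as better feasible trajectories appear. The names Trajectory, label_trajectory, reward_threshold, and cost_budget are assumptions made for illustration only.

    # Illustrative sketch of a good/bad trajectory-labelling oracle.
    # Assumed names: Trajectory, label_trajectory, reward_threshold, cost_budget.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Trajectory:
        rewards: List[float]  # per-step rewards collected by the current policy
        costs: List[float]    # per-step safety costs along the same trajectory

    def label_trajectory(traj: Trajectory,
                         reward_threshold: float,
                         cost_budget: float) -> str:
        """Label a trajectory 'good' if it beats the reward threshold while
        satisfying the overall cost constraint; otherwise 'bad'."""
        total_reward = sum(traj.rewards)
        total_cost = sum(traj.costs)
        if total_reward >= reward_threshold and total_cost <= cost_budget:
            return "good"
        return "bad"

    def update_threshold(reward_threshold: float,
                         good_returns: List[float]) -> float:
        """Raise the reward threshold as better feasible trajectories are found,
        mirroring the threshold that 'is varied with learning'."""
        if not good_returns:
            return reward_threshold
        return max(reward_threshold, max(good_returns))

In use, trajectories labelled "good" would be passed to an imitation-learning step while "bad" trajectories are avoided; the exact update rules are specified in the paper, not here.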

Keywords

Safe reinforcement learning, Imitation learning

Discipline

Artificial Intelligence and Robotics

Research Areas

Information Systems and Management; Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence, Vancouver, Canada

Publisher

Association for the Advancement of Artificial Intelligence (AAAI)

City or Country

Vancouver, Canada
