Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

8-2024

Abstract

Markov decision processes (MDPs) provide a standard framework for sequential decision making under uncertainty. However, MDPs do not take uncertainty in transition probabilities into account. Robust Markov decision processes (RMDPs) address this shortcoming of MDPs by assigning to each transition an uncertainty set rather than a single probability value. In this work, we consider polytopic RMDPs in which all uncertainty sets are polytopes and study the problem of solving long-run average reward polytopic RMDPs. We present a novel perspective on this problem and show that it can be reduced to solving long-run average reward turn-based stochastic games with finite state and action spaces. This reduction allows us to derive several important consequences that were hitherto not known to hold for polytopic RMDPs. First, we derive new computational complexity bounds for solving long-run average reward polytopic RMDPs, showing for the first time that the threshold decision problem for them is in NP ∩ coNP and that they admit a randomized algorithm with sub-exponential expected runtime. Second, we present Robust Polytopic Policy Iteration (RPPI), a novel policy iteration algorithm for solving long-run average reward polytopic RMDPs. Our experimental evaluation shows that RPPI is much more efficient in solving long-run average reward polytopic RMDPs compared to state-of-the-art methods based on value iteration.
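To make the notion of a polytopic uncertainty set concrete, the sketch below shows a single worst-case (robust) expectation over one such set, computed as a linear program. It is a minimal illustration only, not the paper's RPPI algorithm or its reduction to stochastic games; the interval-style polytope and all function names are assumptions introduced here for exposition.

```python
# Illustrative sketch only: one robust expectation over a polytopic
# uncertainty set {p : A_ub @ p <= b_ub, sum(p) = 1, p >= 0}, solved as an LP.
# This is NOT the paper's RPPI algorithm; the example polytope is hypothetical.
import numpy as np
from scipy.optimize import linprog

def worst_case_expectation(values, A_ub, b_ub):
    """Adversary picks the distribution p in the polytope that minimizes p . values."""
    n = len(values)
    res = linprog(
        c=values,                          # adversary minimizes the expected value
        A_ub=A_ub, b_ub=b_ub,              # polytopic constraints on p
        A_eq=np.ones((1, n)), b_eq=[1.0],  # p must be a probability distribution
        bounds=[(0.0, 1.0)] * n,
        method="highs",
    )
    return res.fun, res.x

# Hypothetical 3-successor transition: each probability confined to an interval,
# which is one simple instance of a polytopic uncertainty set.
values = np.array([0.0, 1.0, 2.0])          # current value estimates of successors
A_ub = np.vstack([np.eye(3), -np.eye(3)])   # p_i <= hi_i and -p_i <= -lo_i
b_ub = np.concatenate([[0.5, 0.6, 0.7], [-0.1, -0.2, -0.1]])
val, p = worst_case_expectation(values, A_ub, b_ub)
print("worst-case value:", val, "achieved by p =", p)
```

In a robust Bellman backup, a step of this kind would be performed for each state-action pair, which is the sense in which the environment acts as an adversarial second player in the turn-based stochastic game the paper reduces to.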

Discipline

Artificial Intelligence and Robotics

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, Jeju, Korea, August 3-9, 2024

First Page

6707

Last Page

6715

Identifier

10.24963/ijcai.2024/741

Publisher

IJCAI

City or Country

Jeju, Korea

Additional URL

https://doi.org/10.24963/ijcai.2024/741
