Publication Type

Conference Proceeding Article

Version

Published Version

Publication Date

2-2017

Abstract

A standard objective in partially-observable Markov decision processes (POMDPs) is to find a policy that maximizes the expected discounted-sum payoff. However, such policies may still permit unlikely but highly undesirable outcomes, which is especially problematic in safety-critical applications. Recently, there has been a surge of interest in POMDPs where the goal is to maximize the probability that the payoff is at least a given threshold, but these approaches do not consider any optimization beyond satisfying this threshold constraint. In this work we go beyond both the "expectation" and "threshold" approaches and consider a "guaranteed payoff optimization (GPO)" problem for POMDPs, where we are given a threshold t and the objective is to find a policy σ such that a) each possible outcome of σ yields a discounted-sum payoff of at least t, and b) the expected discounted-sum payoff of σ is optimal (or near-optimal) among all policies satisfying a). We present a practical approach to tackle the GPO problem and evaluate it on standard POMDP benchmarks.
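
As a reading aid, the GPO objective described in the abstract can be written out as follows. This is a minimal sketch in conventional discounted-payoff notation; the symbols Disc, γ, r, and Σ_t are our choices for illustration, not necessarily the notation used in the paper. For a play ρ = s_0 a_0 s_1 a_1 … under discount factor γ ∈ (0,1) and reward function r:

% Discounted-sum payoff of a play (conventional definition):
\[
  \mathrm{Disc}(\rho) \;=\; \sum_{i=0}^{\infty} \gamma^{i}\, r(s_i, a_i)
\]
% GPO with threshold t: among the policies all of whose consistent
% plays meet the threshold, maximize the expected payoff.
\[
  \sigma^{*} \in \operatorname*{arg\,max}_{\sigma \in \Sigma_t} \mathbb{E}^{\sigma}\!\left[\mathrm{Disc}\right],
  \qquad
  \Sigma_t \;=\; \{\, \sigma \mid \mathrm{Disc}(\rho) \ge t \text{ for every play } \rho \text{ consistent with } \sigma \,\}
\]

Condition a) of the abstract corresponds to membership in Σ_t; condition b) is the arg max over that set.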

Discipline

Artificial Intelligence and Robotics

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

AAAI'17: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, California, February 4-9, 2017

First Page

3725

Last Page

3732

Identifier

10.5555/3298023.3298109

Publisher

AAAI

City or Country

Washington, DC

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.5555/3298023.3298109
