Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

1-2015

Abstract

Markov Decision Problems (MDPs) offer an effective mechanism for planning under uncertainty. However, due to unavoidable uncertainty over models, it is often difficult to obtain an exact specification of an MDP. We are interested in solving MDPs whose transition and reward functions are not exactly specified. Existing research has primarily focused on computing infinite-horizon stationary policies when optimizing robustness, regret, and percentile-based objectives. We focus specifically on finite-horizon problems, with a special emphasis on objectives that are separable over individual instantiations of model uncertainty (i.e., objectives that can be expressed as a sum over instantiations of model uncertainty): (a) First, we identify two separable objectives for uncertain MDPs: Average Value Maximization (AVM) and Confidence Probability Maximization (CPM). (b) Second, we provide optimization-based solutions to compute policies for uncertain MDPs with such objectives. In particular, we exploit the separability of the AVM and CPM objectives by employing Lagrangian dual decomposition (LDD). (c) Finally, we demonstrate the utility of the LDD approach on a benchmark problem from the literature.
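For intuition, separability means the objective decomposes additively over a finite set Q of sampled instantiations q of the uncertain model. A minimal sketch of the two objectives, in illustrative notation not taken from the paper (here $V^{\pi}(q)$ denotes the expected value of policy $\pi$ under instantiation $q$, and $\beta$ a given confidence threshold):

AVM: $\max_{\pi} \; \frac{1}{|Q|} \sum_{q \in Q} V^{\pi}(q)$

CPM: $\max_{\pi} \; \frac{1}{|Q|} \sum_{q \in Q} \mathbb{1}\!\left[ V^{\pi}(q) \ge \beta \right]$

Because each summand depends on the model only through its own instantiation $q$, Lagrangian dual decomposition can split the coupled problem into per-instantiation subproblems coordinated by dual variables that enforce agreement on a single policy.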

Keywords

Markov Decision Problems (MDPs), Lagrangian Dual Decomposition, Bayesian Reinforcement Learning, Robust MDPs

Discipline

Artificial Intelligence and Robotics | Computer Sciences | Numerical Analysis and Scientific Computing

Research Areas

Intelligent Systems and Optimization

Publication

Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence: 25-30 January 2015, Austin, Texas, USA

First Page

3454

Last Page

3460

ISBN

9781577356981

Publisher

AAAI Press

City or Country

Palo Alto, CA

Copyright Owner and License

Publisher

Additional URL

https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9843
