Publication Type

Journal Article

Publication Date

2014

Abstract

We introduce a class of finite-horizon dynamic optimization problems that we call multi-action stochastic dynamic programs (DPs). Their distinguishing feature is that the decision in each state is a multi-dimensional vector. These problems can in principle be solved using Bellman's backward recursion. However, the complexity of this procedure grows exponentially in the dimension of the decision vectors. This is called the curse of action-space dimensionality. To overcome this computational challenge, we propose an approximation algorithm rooted in the game-theoretic paradigm of Sampled Fictitious Play (SFP). SFP solves a sequence of DPs with a one-dimensional action space, each of which is exponentially smaller than the original multi-action stochastic DP. In particular, the computational effort in a fixed number of SFP iterations is linear in the dimension of the decision vectors. We show that the sequence of SFP iterates converges to a local optimum, and present a numerical case study in manufacturing where SFP is able to find solutions with objective values within 1% of the optimal objective value hundreds of times faster than backward recursion. In this case study, SFP solutions are also better by a statistically significant margin than those found by a one-step lookahead heuristic.
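
For intuition, the following is a minimal sketch of the contrast the abstract draws; it is not the paper's model or code. The toy reward, transition, horizon, and the names reward, step, backward_recursion, and sfp are all invented for illustration. The exact solver enumerates all 2^d joint actions in every state, while the SFP-style loop solves d DPs with a one-dimensional action set per iteration, each action component best-responding to a joint policy sampled from the other components' play history.

```python
# Toy illustration of the abstract's two approaches (invented problem data,
# NOT the paper's implementation): (i) exact Bellman backward recursion,
# whose per-state action enumeration is exponential in the action
# dimension d, and (ii) a Sampled-Fictitious-Play-style loop in which each
# component of the decision vector best-responds to components sampled
# from the history of past play, so each iteration solves d DPs with a
# one-dimensional (binary) action set.
import itertools
import random

T, S, d = 5, 6, 4            # horizon, number of states, action dimension
random.seed(0)

def reward(s, a):
    # toy reward coupling the state and the binary action components
    return sum((i + 1) * ai for i, ai in enumerate(a)) - 0.1 * (s - sum(a)) ** 2

def step(s, a):
    # toy deterministic transition (a stochastic kernel works the same way)
    return (s + sum(a)) % S

def backward_recursion():
    """Exact solution: enumerates all 2**d joint actions in every state."""
    V = [[0.0] * S for _ in range(T + 1)]
    for t in range(T - 1, -1, -1):
        for s in range(S):
            V[t][s] = max(reward(s, a) + V[t + 1][step(s, a)]
                          for a in itertools.product((0, 1), repeat=d))
    return V[0]

def sfp(iterations=50):
    """SFP-style loop: each component solves a one-dimensional-action DP
    against a joint policy sampled from the other components' history."""
    # history[i] = past policies of component i; a policy maps (t, s) -> 0/1.
    history = [[{(t, s): 0 for t in range(T) for s in range(S)}]
               for i in range(d)]
    for _ in range(iterations):
        sampled = [random.choice(history[i]) for i in range(d)]
        for i in range(d):
            # Best response of component i: a DP whose action set is {0, 1}.
            V = [[0.0] * S for _ in range(T + 1)]
            pol = {}
            for t in range(T - 1, -1, -1):
                for s in range(S):
                    best = None
                    for ai in (0, 1):
                        a = tuple(ai if j == i else sampled[j][(t, s)]
                                  for j in range(d))
                        q = reward(s, a) + V[t + 1][step(s, a)]
                        if best is None or q > best[0]:
                            best = (q, ai)
                    V[t][s] = best[0]
                    pol[(t, s)] = best[1]
            history[i].append(pol)
    # Evaluate the joint policy formed by each component's latest play.
    latest = [history[i][-1] for i in range(d)]
    V = [[0.0] * S for _ in range(T + 1)]
    for t in range(T - 1, -1, -1):
        for s in range(S):
            a = tuple(latest[i][(t, s)] for i in range(d))
            V[t][s] = reward(s, a) + V[t + 1][step(s, a)]
    return V[0]

print("exact V0:", [round(v, 2) for v in backward_recursion()])
print("SFP   V0:", [round(v, 2) for v in sfp()])
```

In this sketch the per-iteration work of the SFP loop is O(T·S·d), versus O(T·S·2^d) for backward recursion, mirroring the linear-versus-exponential gap in action-space dimension that the abstract describes.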

Discipline

Artificial Intelligence and Robotics | Business | Operations Research, Systems Engineering and Industrial Engineering

Research Areas

Intelligent Systems and Decision Analytics

Publication

IIE Transactions

Volume

46

Issue

7

First Page

742

Last Page

756

ISSN

0740-817X

Identifier

10.1080/0740817X.2013.857062

Publisher

Taylor & Francis

Creative Commons License

Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Additional URL

http://dx.doi.org/10.1080/0740817X.2013.857062