Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

2-2017

Abstract

Decentralized Markov Decision Process (Dec-MDP) provides a rich framework to represent cooperative, decentralized and stochastic planning problems under transition uncertainty. However, solving a Dec-MDP to generate coordinated yet decentralized policies is NEXP-Hard. Researchers have made significant progress in providing approximate approaches to improve scalability with respect to the number of agents. However, there has been little or no research devoted to finding guarantees on solution quality for approximate approaches considering multiple (more than two) agents. We have a similar situation with respect to the competitive decentralized planning problem and the Stochastic Game (SG) model. To address this, we identify models in the cooperative and competitive case that rely on submodular rewards, where we show that existing approximate approaches can provide strong quality guarantees (a priori, and for the cooperative case also a posteriori guarantees). We then provide solution approaches and demonstrate improved online guarantees on benchmark problems from the literature for the cooperative case.
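
As background for the submodular-reward condition mentioned in the abstract (this sketch is standard material on submodularity, not a statement of this paper's specific bounds): a set function f over a ground set V is submodular if it exhibits diminishing returns, and for monotone non-decreasing f with f(emptyset) = 0 under a cardinality constraint, greedy selection attains the classic guarantee of Nemhauser, Wolsey and Fisher (1978).

% Illustrative LaTeX sketch of submodularity and the greedy bound;
% the paper's own guarantees for Dec-MDPs/SGs may take a different form.
\[
  f(A \cup \{x\}) - f(A) \;\ge\; f(B \cup \{x\}) - f(B)
  \qquad \text{for all } A \subseteq B \subseteq V,\ x \in V \setminus B,
\]
\[
  f(S_{\mathrm{greedy}}) \;\ge\; \left(1 - \tfrac{1}{e}\right) \max_{|S| \le k} f(S).
\]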

Keywords

Multiagent Systems, Planning under uncertainty

Discipline

Artificial Intelligence and Robotics | Operations Research, Systems Engineering and Industrial Engineering

Research Areas

Intelligent Systems and Optimization

Publication

Proceedings of the 31st AAAI Conference on Artificial Intelligence 2017: San Francisco, February 4-10

First Page

3021

Last Page

3028

Publisher

AAAI Press

City or Country

Menlo Park, CA

Copyright Owner and License

Publisher

Additional URL

https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14928
