Publication Type

Conference Proceeding Article

Publication Date

2-2017

Abstract

Decentralized Markov Decision Process (Dec-MDP) provides a rich framework to represent cooperative, decentralized and stochastic planning problems under transition uncertainty. However, solving a Dec-MDP to generate coordinated yet decentralized policies is NEXP-Hard. Researchers have made significant progress in providing approximate approaches to improve scalability with respect to the number of agents. However, there has been little or no research devoted to finding guarantees on solution quality for approximate approaches considering multiple agents (more than two agents). We have a similar situation with respect to the competitive decentralized planning problem and the Stochastic Game (SG) model. To address this, we identify models in the cooperative and competitive cases that rely on submodular rewards, where we show that existing approximate approaches can provide strong quality guarantees (a priori, and for the cooperative case also a posteriori guarantees). We then provide solution approaches and demonstrate improved online guarantees on benchmark problems from the literature for the cooperative case.
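For intuition only: the a priori guarantees mentioned in the abstract stem from submodularity (diminishing marginal returns) of the reward. The sketch below is not taken from the paper; it illustrates a standard coverage-style monotone submodular reward and greedy selection, which carries the classic (1 - 1/e) a priori guarantee for monotone submodular maximization under a cardinality constraint. All names and the tiny example instance are hypothetical.

    # Illustrative sketch (not the paper's algorithm): a coverage-style
    # monotone submodular reward and greedy selection, which enjoys the
    # classic a priori (1 - 1/e) approximation guarantee.
    from itertools import combinations

    def coverage(selected, coverage_sets):
        """Reward = number of distinct targets covered by the selected items."""
        covered = set()
        for item in selected:
            covered |= coverage_sets[item]
        return len(covered)

    def greedy(coverage_sets, budget):
        """Pick items one at a time, each maximizing the marginal reward gain."""
        chosen = []
        for _ in range(budget):
            remaining = [i for i in coverage_sets if i not in chosen]
            if not remaining:
                break
            best = max(remaining,
                       key=lambda i: coverage(chosen + [i], coverage_sets)
                                     - coverage(chosen, coverage_sets))
            chosen.append(best)
        return chosen

    if __name__ == "__main__":
        # Hypothetical instance: items (e.g. agent assignments) covering targets.
        sets = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}, "d": {1}}
        picked = greedy(sets, budget=2)
        # Brute-force optimum for comparison (tiny instance only).
        opt = max(coverage(list(c), sets) for c in combinations(sets, 2))
        print(picked, coverage(picked, sets), "optimum:", opt)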

Discipline

Databases and Information Systems | Software Engineering

Research Areas

Information Systems and Management

Publication

AAAI Conference on Artificial Intelligence (AAAI)

First Page

3021

Last Page

3028

Publisher

AAAI

City or Country

San Francisco, USA

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
