Publication Type

Conference Proceeding Article

Version

Postprint

Publication Date

2013

Abstract

The Decentralized Partially Observable Markov Decision Process (Dec-POMDP) is a powerful model for multi-agent planning under uncertainty, but its applicability is hindered by its high complexity – solving Dec-POMDPs optimally is NEXP-hard. Recently, Kumar et al. introduced the Value Factorization (VF) framework, which exploits decomposable value functions that can be factored into subfunctions. This framework has been shown to be a generalization of several specialized models such as TI-Dec-MDPs, ND-POMDPs and TD-POMDPs, which leverage different forms of sparse agent interactions to improve the scalability of planning. Existing algorithms for these models assume that the interaction graph of the problem is given. So far, no studies have addressed the generation of interaction graphs. In this paper, we address this gap by introducing three algorithms to automatically generate interaction graphs for models within the VF framework and establish lower and upper bounds on the expected reward of an optimal joint policy. We illustrate experimentally the benefits of these techniques for sensor placement in a decentralized tracking application.

Discipline

Artificial Intelligence and Robotics | Computer Sciences

Research Areas

Intelligent Systems and Decision Analytics

Publication

IJCAI '13: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence

First Page

411

Last Page

417

ISBN

9781577356332

City or Country

Beijing, China

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License
