Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

5-2006

Abstract

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are evolving as a popular approach for modeling multiagent systems, and many different algorithms have been proposed to obtain locally or globally optimal policies. Unfortunately, most of these algorithms have either been explicitly designed or experimentally evaluated assuming knowledge of a starting belief point, an assumption that often does not hold in complex, uncertain domains. Instead, in such domains, it is important for agents to explicitly plan over continuous belief spaces. This paper provides a novel algorithm to explicitly compute finite horizon policies over continuous belief spaces, without restricting the space of policies. By marrying an efficient single-agent POMDP solver with a heuristic distributed POMDP policy-generation algorithm, locally optimal joint policies are obtained, each of which dominates within a different part of the belief region. We provide heuristics that significantly improve the efficiency of the resulting algorithm and provide detailed experimental results. To the best of our knowledge, these are the first run-time results for analytically generating policies over continuous belief spaces in distributed POMDPs.
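
The abstract's central step is selecting, for a given starting belief, whichever locally optimal joint policy dominates in that part of the belief space. Below is a minimal illustrative sketch of that selection step only, assuming each candidate policy's value is representable as a linear function (an alpha-vector-like weight per state) over the belief simplex; the function and policy names are hypothetical and this is not the paper's implementation.

```python
# Illustrative sketch (not the paper's algorithm): given candidate joint
# policies whose value functions are linear over the belief simplex, pick
# the one that dominates at a particular starting belief point.

from typing import List, Tuple


def expected_value(alpha: List[float], belief: List[float]) -> float:
    """Expected value of a policy whose value function is the linear
    function `alpha`, evaluated at `belief` (a distribution over states)."""
    return sum(a * b for a, b in zip(alpha, belief))


def select_dominant_policy(
    policies: List[Tuple[str, List[float]]], belief: List[float]
) -> Tuple[str, float]:
    """Among (name, alpha) candidates, return the policy maximizing
    expected value at this belief point."""
    return max(
        ((name, expected_value(alpha, belief)) for name, alpha in policies),
        key=lambda pair: pair[1],
    )


if __name__ == "__main__":
    # Two hypothetical joint policies over a 2-state problem; each
    # dominates a different region of the belief interval [0, 1].
    candidates = [("pi_A", [10.0, 2.0]), ("pi_B", [3.0, 8.0])]
    for p in (0.2, 0.8):
        belief = [p, 1.0 - p]
        name, value = select_dominant_policy(candidates, belief)
        print(f"belief={belief}: execute {name} (value {value:.1f})")
```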

Discipline

Artificial Intelligence and Robotics | Business | Operations Research, Systems Engineering and Industrial Engineering

Publication

AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems: Hakodate, Japan, May 8-12, 2006

First Page

289

Last Page

296

ISBN

9781595933034

Identifier

10.1145/1160633.1160683

Publisher

ACM

City or Country

New York

Additional URL

http://dx.doi.org/10.1145/1160633.1160683
