Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

6-2012

Abstract

An interesting class of multi-agent POMDP planning problems can be solved by having agents iteratively solve individual POMDPs, find interactions with other individual plans, shape their transition and reward functions to encourage good interactions and discourage bad ones, and then recompute a new plan. D-TREMOR showed that this approach can allow distributed planning for hundreds of agents. However, the quality and speed of the planning process depend on the prioritization scheme used. Lower priority agents shape their models with respect to the models of higher priority agents. In this paper, we introduce a new prioritization scheme that is guaranteed to converge and is empirically better, in terms of solution quality and planning time, than the existing prioritization scheme for some problems.
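
The abstract outlines an iterative, prioritized shape-and-replan loop: each agent solves its own POMDP, detects interactions with higher-priority agents' plans, shapes its model, and replans. The following is a minimal Python sketch of that loop under assumed placeholder names (AgentModel, solve_pomdp, find_interactions, shape_model are hypothetical); it is not D-TREMOR's actual implementation.

# Minimal sketch of a prioritized, iterative shape-and-replan loop for
# multi-agent POMDP planning, as described in the abstract. All names
# below are hypothetical placeholders, not the D-TREMOR code.

from dataclasses import dataclass


@dataclass
class AgentModel:
    """Stand-in for one agent's individual POMDP model and current policy."""
    agent_id: int
    policy: object = None


def solve_pomdp(model: AgentModel) -> object:
    """Placeholder: solve the agent's individual POMDP, return a policy."""
    return f"policy-{model.agent_id}"


def find_interactions(model: AgentModel, higher_priority: list) -> list:
    """Placeholder: detect interactions with higher-priority agents' plans."""
    return [(model.agent_id, other.agent_id) for other in higher_priority]


def shape_model(model: AgentModel, interactions: list) -> None:
    """Placeholder: adjust transition/reward functions to encourage good
    interactions and discourage bad ones."""
    pass


def prioritized_planning(models: list, priority_order: list, iterations: int = 5) -> list:
    """Agents replan in priority order; lower-priority agents shape their
    models with respect to the plans of higher-priority agents."""
    for _ in range(iterations):
        planned = []
        for agent_id in priority_order:
            model = models[agent_id]
            interactions = find_interactions(model, planned)
            shape_model(model, interactions)
            model.policy = solve_pomdp(model)
            planned.append(model)
    return models


if __name__ == "__main__":
    agents = [AgentModel(i) for i in range(4)]
    prioritized_planning(agents, priority_order=[0, 1, 2, 3])
    print([a.policy for a in agents])

The choice of priority_order is exactly the prioritization scheme whose effect on solution quality and planning time the paper studies.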

Keywords

DEC-POMDP, Uncertainty, Multi-Agent Systems

Discipline

Artificial Intelligence and Robotics | Business | Operations Research, Systems Engineering and Industrial Engineering

Publication

AAMAS '12: Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, June 2012, Valencia, Spain

Volume

3

First Page

1269

Last Page

1270

ISBN

9780981738130

Publisher

ACM

City or Country

Valencia, Spain

Copyright Owner and License

LARC

Additional URL

http://dl.acm.org/citation.cfm?id=2343896.2343957
