Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

6-2012

Abstract

Distributed constraint optimization problems (DCOPs) are well suited for modeling multi-agent coordination problems where the primary interactions are between local subsets of agents. However, one limitation of DCOPs is the assumption that the constraint rewards are known with certainty. Researchers have thus extended DCOPs to Stochastic DCOPs (SDCOPs), where rewards are sampled from known probability distribution reward functions, and introduced algorithms to find solutions with the largest expected reward. Unfortunately, such a solution can be very risky, that is, very likely to result in a poor reward. In this paper, we therefore make three contributions: (1) we propose a stricter objective for SDCOPs, namely to find a solution with the most stochastically dominating probability distribution reward function; (2) we introduce an algorithm to find such solutions; and (3) we show that stochastically dominating solutions can indeed be less risky than expected-reward-maximizing solutions.
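To make the dominance objective concrete, below is a minimal sketch (not the paper's algorithm) of a first-order stochastic dominance check between two discrete reward distributions; the function names and the example distributions are hypothetical illustrations. It shows how two solutions with identical expected reward can still differ in risk, which the expected-reward objective cannot distinguish.

```python
import numpy as np

def cdf(values, probs, grid):
    """Cumulative distribution of a discrete reward distribution, evaluated on a grid."""
    values, probs = np.asarray(values), np.asarray(probs)
    return np.array([probs[values <= g].sum() for g in grid])

def stochastically_dominates(v1, p1, v2, p2):
    """True if distribution 1 first-order stochastically dominates distribution 2,
    i.e. CDF1(x) <= CDF2(x) for all x (its probability mass sits on higher rewards)."""
    grid = np.union1d(v1, v2)
    return bool(np.all(cdf(v1, p1, grid) <= cdf(v2, p2, grid) + 1e-12))

# Two hypothetical reward distributions with the same expected reward (5.0):
# A pays a sure 5; B gambles between 0 and 10 with equal probability.
A_vals, A_probs = [5], [1.0]
B_vals, B_probs = [0, 10], [0.5, 0.5]
print(stochastically_dominates(A_vals, A_probs, B_vals, B_probs))  # False
print(stochastically_dominates(B_vals, B_probs, A_vals, A_probs))  # False
```

Neither distribution dominates the other here, but B has a 50% chance of the worst outcome while A never pays less than 5, which is exactly the kind of risk difference that motivates preferring stochastically dominating (or less dominated) solutions over purely expectation-based ones.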

Keywords

DCOP, DPOP, Stochastic Dominance, Uncertainty

Discipline

Artificial Intelligence and Robotics | Computer Sciences | Operations Research, Systems Engineering and Industrial Engineering

Publication

AAMAS '12: Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems: 4-8 June 2012, Valencia, Spain

Volume

1

First Page

272

Last Page

279

ISBN

9780981738116

Publisher

IFAAMAS

City or Country

Richland, SC

Copyright Owner and License

LARC

Additional URL

http://dl.acm.org/citation.cfm?id=2343613