Publication Type

Journal Article

Version

acceptedVersion

Publication Date

3-2019

Abstract

Researchers have used distributed constraint optimization problems (DCOPs) to model various multi-agent coordination and resource allocation problems. Very recently, Ottens et al. proposed a promising new approach to solve DCOPs that is based on confidence bounds via their Distributed UCT (DUCT) sampling-based algorithm. Unfortunately, its memory requirement per agent is exponential in the number of agents in the problem, which prevents it from scaling up to large problems. Thus, in this article, we introduce two new sampling-based DCOP algorithms called Sequential Distributed Gibbs (SD-Gibbs) and Parallel Distributed Gibbs (PD-Gibbs). Both algorithms have a memory requirement per agent that is linear in the number of agents in the problem. Our empirical results show that our algorithms can find better solutions than DUCT, run faster than DUCT, and solve some large problems that DUCT failed to solve due to memory limitations.
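The abstract refers to Gibbs-sampling-based algorithms for DCOPs. The sketch below is a minimal, centralized illustration of Gibbs sampling on a toy constraint optimization problem; it is not the SD-Gibbs or PD-Gibbs algorithm from the article (both are distributed, message-passing algorithms). The variable names, the toy utility tables, and the num_samples parameter are all illustrative assumptions.

```python
import math
import random

# Illustrative sketch only: centralized Gibbs sampling on a tiny, assumed
# constraint graph. Not the article's SD-Gibbs/PD-Gibbs algorithms.

random.seed(0)

DOMAIN = [0, 1, 2]                      # domain shared by all variables
VARIABLES = ["x1", "x2", "x3"]          # one variable per (simulated) agent

# Binary utility functions over pairs of variables (to be maximized).
UTILS = {
    ("x1", "x2"): lambda a, b: 2 if a == b else 0,
    ("x2", "x3"): lambda a, b: 1 if a != b else 0,
}

def local_utility(var, value, assignment):
    """Utility of constraints involving `var`, with `var` set to `value`."""
    total = 0
    for (u, v), f in UTILS.items():
        if var == u:
            total += f(value, assignment[v])
        elif var == v:
            total += f(assignment[u], value)
    return total

def total_utility(assignment):
    """Sum of all constraint utilities under a complete assignment."""
    return sum(f(assignment[u], assignment[v]) for (u, v), f in UTILS.items())

def gibbs_sample(num_samples=1000):
    """Run Gibbs sampling and return the best complete assignment seen."""
    assignment = {v: random.choice(DOMAIN) for v in VARIABLES}
    best, best_util = dict(assignment), total_utility(assignment)
    for _ in range(num_samples):
        for var in VARIABLES:
            # Resample `var` with probability proportional to exp(local
            # utility), keeping all other variables fixed at their values.
            weights = [math.exp(local_utility(var, d, assignment)) for d in DOMAIN]
            assignment[var] = random.choices(DOMAIN, weights=weights)[0]
        util = total_utility(assignment)
        if util > best_util:
            best, best_util = dict(assignment), util
    return best, best_util

if __name__ == "__main__":
    solution, utility = gibbs_sample()
    print(solution, utility)  # best assignment found and its total utility
```

In the distributed algorithms described in the article, each agent holds only its own variable and exchanges messages with neighbors, which is what keeps per-agent memory linear in the number of agents; the centralized loop above is just a convenient way to show the underlying sampling step.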

Keywords

Constrained optimization, DUCT, Multi-agent systems, Confidence bounds, Distributed constraint optimization, Large problems, Memory requirements, Multi-agent coordination, Resource allocation problems, Sampling-based algorithms, Problem solving

Discipline

Artificial Intelligence and Robotics | Theory and Algorithms

Research Areas

Intelligent Systems and Optimization

Publication

Journal of Artificial Intelligence Research

Volume

64

First Page

705

Last Page

748

ISSN

1076-9757

Identifier

10.1613/jair.1.11400

Publisher

AI Access Foundation

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1613/jair.1.11400