Probabilistic Inference Techniques for Scalable Multiagent Decision Making
Decentralized POMDPs provide an expressive framework for multiagent sequential decision making. However, the complexity of these models (NEXP-complete even for two agents) has limited their scalability. We present a promising new class of approximation algorithms by developing novel connections between multiagent planning and machine learning. We show how the multiagent planning problem can be reformulated as inference in a mixture of dynamic Bayesian networks (DBNs). This planning-as-inference approach paves the way for the application of efficient DBN inference techniques to multiagent decision making. To further improve scalability, we identify conditions that are sufficient to extend the approach to multiagent systems with dozens of agents. Specifically, we show that the necessary inference within the expectation-maximization framework can be decomposed into processes that often involve only a small subset of agents, thereby facilitating scalability. We further show that a number of existing multiagent planning models satisfy these conditions. Experiments on large planning benchmarks confirm the benefits of our approach in terms of runtime and scalability with respect to existing techniques.
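The planning-as-inference idea in the abstract can be illustrated in miniature. For a single-agent MDP with rewards scaled to [0, 1], treating the reward as the likelihood of a binary "success" variable in a mixture of finite-horizon DBNs yields an EM loop whose M-step reweights the policy by expected reward-to-go, i.e. pi'(a|s) proportional to pi(a|s) * Q^pi(s, a). The sketch below uses a hypothetical two-state, two-action MDP and is a single-agent toy only; it is not the paper's decentralized algorithm, whose contribution is decomposing the E-step across small subsets of agents.

```python
import numpy as np

# Hypothetical toy MDP, for illustration only.
# P[a, s, s'] = transition probability; R[s, a] in [0, 1] is read as
# P(r = 1 | s, a), so maximizing expected reward becomes maximizing
# the likelihood of the binary "success" variable r in the DBN mixture.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # transitions under action 0
              [[0.1, 0.9], [0.8, 0.2]]])   # transitions under action 1
R = np.array([[0.0, 1.0],                  # rewards in state 0
              [1.0, 0.0]])                 # rewards in state 1
gamma = 0.95                               # mixture weight P(T = t) ~ gamma^t
pi = np.full((2, 2), 0.5)                  # pi[s, a], uniform initial policy

for _ in range(200):
    # E-step: evaluate the current policy. Q satisfies the fixed point
    # Q(s,a) = R(s,a) + gamma * sum_{s'} P(s'|s,a) * sum_{a'} pi(a'|s') Q(s',a')
    Q = np.zeros((2, 2))
    for _ in range(500):
        V = (pi * Q).sum(axis=1)                    # V[s] under current pi
        Q = R + gamma * np.einsum('asz,z->sa', P, V)
    # M-step: multiplicative policy update pi'(a|s) ~ pi(a|s) * Q(s,a).
    new = pi * Q
    pi = new / new.sum(axis=1, keepdims=True)
```

On this toy problem the loop concentrates the policy on the reward-1 action in each state. The per-state normalization in the M-step is what, in the multiagent setting described above, the paper shows can often be computed over a small subset of agents rather than the full joint policy.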
Artificial Intelligence and Robotics | Computer Sciences
Intelligent Systems and Decision Analytics
Journal of Artificial Intelligence Research
Association for the Advancement of Artificial Intelligence / AI Access Foundation
KUMAR, Akshat; ZILBERSTEIN, Shlomo; and TOUSSAINT, Marc.
Probabilistic Inference Techniques for Scalable Multiagent Decision Making. (2015). Journal of Artificial Intelligence Research, 53, 223-270. Research Collection School of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3076