Conference Proceeding Article
Decentralized POMDPs provide an expressive framework for multi-agent sequential decision making. While finite-horizon DEC-POMDPs have enjoyed significant success, progress remains slow for the infinite-horizon case, mainly due to the inherent complexity of optimizing stochastic controllers representing agent policies. We present a promising new class of algorithms for the infinite-horizon case, which recasts the optimization problem as inference in a mixture of DBNs. An attractive feature of this approach is the straightforward adoption of existing inference techniques in DBNs for solving DEC-POMDPs and supporting richer representations such as factored or continuous states and actions. We also derive the Expectation Maximization (EM) algorithm to optimize the joint policy represented as DBNs. Experiments on benchmark domains show that EM compares favorably against the state-of-the-art solvers.
Artificial Intelligence and Robotics | Operations Research, Systems Engineering and Industrial Engineering
Intelligent Systems and Decision Analytics
Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, July 8-11, 2010, Catalina Island, CA
KUMAR, Akshat and Zilberstein, Shlomo.
Anytime Planning for Decentralized POMDPs using Expectation Maximization. (2010). Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, July 8-11, 2010, Catalina Island, CA. 294-301. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/2209
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.