Conference Proceeding Article
Decentralized (PO)MDPs provide an expressive framework for sequential decision making in multiagent systems. Given their computational complexity, recent research has focused on tractable yet practical subclasses of Dec-POMDPs. We address one such subclass, called CDec-POMDP, in which the collective behavior of a population of agents affects the joint reward and the environment dynamics. Our main contribution is an actor-critic (AC) reinforcement learning method for optimizing CDec-POMDP policies. Vanilla AC converges slowly on larger problems. To address this, we show how a particular decomposition of the approximate action-value function over agents leads to effective updates, and we also derive a new way to train the critic based on local reward signals. Comparisons on a synthetic benchmark and a real-world taxi fleet optimization problem show that our new AC approach provides better-quality solutions than the previous best approaches.
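To illustrate the flavor of approach the abstract describes, below is a minimal sketch of tabular actor-critic with a critic decomposed as a per-agent sum, each term trained from a local reward signal. It assumes a toy multiagent setting with a shared softmax policy; the environment, all variable names, and the exact form of the decomposition are illustrative assumptions, not the paper's actual CDec-POMDP method.

```python
import numpy as np

# Toy setting (assumed, not from the paper): M agents, discrete state s shared
# by all agents, each agent picks an action from a shared softmax policy.
# Critic decomposition: Q(s, a) ~ sum_m Q_m(s, a_m), with each Q_m trained by
# TD(0) from that agent's local reward r_m.

rng = np.random.default_rng(0)
M, S, A = 5, 4, 3              # agents, states, actions per agent
gamma, alpha_c, alpha_a = 0.95, 0.1, 0.05

Q = np.zeros((M, S, A))        # per-agent critic tables
theta = np.zeros((S, A))       # shared softmax policy parameters

def policy(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

def step(s, actions):
    # Hypothetical dynamics/rewards: a local reward of 1 when an agent's
    # action matches the state modulo A; next state drawn uniformly.
    r = np.array([1.0 if a == s % A else 0.0 for a in actions])
    return int(rng.integers(S)), r

s = int(rng.integers(S))
for t in range(5000):
    probs = policy(s)
    actions = rng.choice(A, size=M, p=probs)
    s_next, r_local = step(s, actions)
    probs_next = policy(s_next)
    for m in range(M):
        # Critic update from the agent's local reward (expected TD(0) target).
        v_next = probs_next @ Q[m, s_next]
        td = r_local[m] + gamma * v_next - Q[m, s, actions[m]]
        Q[m, s, actions[m]] += alpha_c * td
        # Actor update: softmax policy gradient weighted by the
        # decomposed critic's per-agent value.
        grad = -probs.copy()
        grad[actions[m]] += 1.0
        theta[s] += alpha_a * Q[m, s, actions[m]] * grad
    s = s_next
```

Decomposing the critic this way means each agent's update touches only its own small table, which is the kind of per-agent factorization the abstract credits with making the AC updates effective at scale.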
Artificial Intelligence and Robotics | Computer Sciences | Operations Research, Systems Engineering and Industrial Engineering
Intelligent Systems and Decision Analytics
Advances in Neural Information Processing Systems: Proceedings of NIPS 2017, December 4-9, Long Beach
La Jolla, CA
NGUYEN, Duc Thien; KUMAR, Akshat; and LAU, Hoong Chuin.
Policy gradient with value function approximation for collective multiagent planning. (2017). Advances in Neural Information Processing Systems: Proceedings of NIPS 2017, December 4-9, Long Beach. 1-11. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3871
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.