Conference Proceeding Article
Synergistic interactions between task/resource allocation and stochastic planning exist in many environments, such as transportation and logistics, UAV task assignment, and disaster rescue. Existing research exploiting these synergistic interactions between the two problems has either considered only domains where tasks/resources are completely independent of each other or has focused on approaches with limited scalability. In this paper, we address these two limitations by introducing a generic model for task/resource constrained multi-agent stochastic planning, referred to as TasC-MDPs. We provide two scalable greedy algorithms, one of which provides posterior quality guarantees. Finally, we illustrate the high scalability and solution performance of our approaches in comparison with existing work on two benchmark problems from the literature.
Markov Decision Problems, Multi-Agent Planning, Reasoning with Uncertainty
Theory and Algorithms
Intelligent Systems and Decision Analytics
Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-16), New York, USA, 2016 July 9-15
AGRAWAL, Pritee; VARAKANTHAM, Pradeep; and YEOH, William.
Scalable greedy algorithms for task/resource constrained multi-agent stochastic planning. (2016). Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-16), New York, USA, 2016 July 9-15. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3600
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.