Conference Proceeding Article
Synergistic interactions between task/resource allocation and stochastic planning exist in many environments, such as transportation and logistics, UAV task assignment, and disaster rescue. Existing research on exploiting these synergistic interactions between the two problems has either considered only domains where tasks/resources are completely independent of each other or has focused on approaches with limited scalability. In this paper, we address these two limitations by introducing a generic model for task/resource constrained multi-agent stochastic planning, referred to as TasC-MDPs. We provide two scalable greedy algorithms, one of which provides posterior quality guarantees. Finally, we illustrate the high scalability and solution performance of our approaches in comparison with existing work on two benchmark problems from the literature.
Markov Decision Problems, Multi-Agent Planning, Reasoning with Uncertainty
Artificial Intelligence and Robotics | Theory and Algorithms
Intelligent Systems and Optimization
Proceedings of the 25th International Joint Conference on Artificial Intelligence IJCAI 2016: New York, July 9-15
Palo Alto, CA
AGRAWAL, Pritee; VARAKANTHAM, Pradeep; and YEOH, William.
Scalable greedy algorithms for task/resource constrained multi-agent stochastic planning. (2016). Proceedings of the 25th International Joint Conference on Artificial Intelligence IJCAI 2016: New York, July 9-15. 10-16. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3600
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.