Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
7-2016
Abstract
Synergistic interactions between task/resource allocation and stochastic planning exist in many environments, such as transportation and logistics, UAV task assignment, and disaster rescue. Existing research on exploiting these synergistic interactions between the two problems has either considered only domains where tasks/resources are completely independent of each other or focused on approaches with limited scalability. In this paper, we address these two limitations by introducing a generic model for task/resource constrained multi-agent stochastic planning, referred to as TasC-MDPs. We present two scalable greedy algorithms, one of which provides posterior quality guarantees. Finally, we illustrate the high scalability and solution performance of our approaches in comparison with existing work on two benchmark problems from the literature.
Keywords
Markov Decision Problems, Multi-Agent Planning, Reasoning with Uncertainty
Discipline
Artificial Intelligence and Robotics | Theory and Algorithms
Research Areas
Intelligent Systems and Optimization
Publication
Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI 2016): New York, July 9-15
First Page
10
Last Page
16
ISBN
9781577357704
Publisher
AAAI Press
City or Country
Palo Alto, CA
Citation
AGRAWAL, Pritee; VARAKANTHAM, Pradeep; and YEOH, William.
Scalable greedy algorithms for task/resource constrained multi-agent stochastic planning. (2016). Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI 2016): New York, July 9-15. 10-16.
Available at: https://ink.library.smu.edu.sg/sis_research/3600
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://www.ijcai.org/Proceedings/16/Papers/009.pdf