Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
8-2014
Abstract
This work addresses the coordination issue in a distributed optimization problem (DOP) in which multiple distinct, time-critical tasks are performed to satisfy a global objective function. The performance of these tasks must be coordinated because they share consumable resources and depend on non-consumable resources. Because predefining the performance of the tasks can be sub-optimal for large DOPs, the multi-agent reinforcement learning (MARL) framework is adopted, in which an agent learns the performance of each distinct task using reinforcement learning. To coordinate MARL, we propose a novel coordination strategy that integrates Motivated Learning (ML) and the k-Winner-Take-All (k-WTA) approach. The priority of the agents to the shared resources is determined in real time using Motivated Learning. Because the shared resources are finite, the k-WTA approach is used to allow the maximum number of the most urgent tasks to execute. Agents performing tasks that depend on resources produced by other agents are coordinated using domain knowledge. Comparing our proposed approach with existing approaches, results from experiments on a 16-task DOP and a 68-task DOP show that it is the most effective at coordinating multi-agent reinforcement learning.
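As an illustration of the selection step described in the abstract, the following Python sketch shows how a k-WTA-style rule could admit the largest number of the most urgent tasks under a finite shared-resource budget. The names (select_k_winners, urgency, resource_cost, budget) and the greedy selection rule are illustrative assumptions, not the paper's implementation.

    def select_k_winners(urgency, resource_cost, budget):
        """k-WTA-style selection sketch: admit the most urgent tasks
        until the shared, consumable resource budget is exhausted.

        urgency       -- dict: task id -> priority (e.g. set by Motivated Learning)
        resource_cost -- dict: task id -> amount of shared resource the task consumes
        budget        -- total amount of the finite shared resource available
        """
        winners = []
        remaining = budget
        # Consider tasks from most to least urgent (winner-take-all ordering).
        for task in sorted(urgency, key=urgency.get, reverse=True):
            if resource_cost[task] <= remaining:
                winners.append(task)
                remaining -= resource_cost[task]
        return winners

    # Example: three of the four tasks fit the shared-resource budget.
    urgency = {"t1": 0.9, "t2": 0.7, "t3": 0.4, "t4": 0.2}
    cost = {"t1": 2, "t2": 3, "t3": 4, "t4": 1}
    print(select_k_winners(urgency, cost, budget=6))  # -> ['t1', 't2', 't4']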
Keywords
Reinforcement learning, Multi-agent reinforcement learning
Discipline
Artificial Intelligence and Robotics | Databases and Information Systems
Research Areas
Data Science and Engineering
Publication
2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT): Warsaw, August 11-14
First Page
190
Last Page
197
ISBN
9781479941438
Identifier
10.1109/IJCNN.2014.6889624
Publisher
IEEE Computer Society
City or Country
Los Alamitos, CA
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1109/WI-IAT.2012.154