Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

7-2014

Abstract

Most non-trivial problems require the coordinated performance of multiple goal-oriented and time-critical tasks. Coordination is required because of dependencies among the tasks and the sharing of resources. In this work, an agent learns to perform a task using reinforcement learning with a self-organizing neural network as the function approximator. We propose a novel coordination strategy integrating Motivated Learning (ML) and a self-organizing neural network for multi-agent reinforcement learning (MARL). Specifically, we adapt the ML idea of using pain signals to overcome the resource competition issue. Dependencies among the agents are resolved using domain knowledge. To avoid domineering agents, the task goals are staggered over multiple stages, where a stage is completed by attaining a particular combination of task goals. Results from our experiments, conducted using the popular PC-based game Starcraft Broodwar, show that the goals of multiple tasks can be attained efficiently using the proposed coordination strategy.
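The abstract gives no implementation details, so the following Python sketch is only an illustration of the general pain-signal idea mentioned above: the class names, pain-update rule, and arbitration rule are assumptions for exposition, not the coordination strategy described in the paper.

# Illustrative sketch only: the pain update and arbitration rule below are
# assumptions, not the authors' method.

class Task:
    def __init__(self, name, pain_gain):
        self.name = name
        self.pain = 0.0          # accumulated "pain" while the goal is unmet
        self.pain_gain = pain_gain
        self.goal_met = False

    def update_pain(self):
        # Pain grows while the goal is unmet and decays once it is satisfied.
        if self.goal_met:
            self.pain *= 0.5
        else:
            self.pain += self.pain_gain

def allocate_resource(tasks):
    # Grant the shared resource to the task currently signalling the most pain,
    # so no single agent monopolises the resource indefinitely.
    return max(tasks, key=lambda t: t.pain)

if __name__ == "__main__":
    tasks = [Task("gather_minerals", 0.2), Task("build_units", 0.5)]
    for step in range(5):
        for t in tasks:
            t.update_pain()
        winner = allocate_resource(tasks)
        print(f"step {step}: resource -> {winner.name} (pain={winner.pain:.2f})")

In this toy setting, a task whose goal stays unmet accumulates pain faster and eventually wins the resource, which is one simple way a pain signal could arbitrate resource competition among agents.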

Keywords

Games, Learning (artificial intelligence), Vectors, Neural networks, Real-time systems

Discipline

Databases and Information Systems | OS and Networks

Publication

2014 International Joint Conference on Neural Networks (IJCNN): Beijing, July 6-11: Proceedings

First Page

4229

Last Page

4236

ISBN

9781479914845

Identifier

10.1109/IJCNN.2014.6889624

Publisher

IEEE

City or Country

Piscataway, NJ

Additional URL

https://doi.org/10.1109/IJCNN.2014.6889624
