Publication Type

Journal Article

Version

submittedVersion

Publication Date

12-2021

Abstract

Coordinated control of multi-agent teams is an important task in many real-time strategy (RTS) games. Most prior work adopts a micromanagement strategy, whereby individual agents operate independently and make their own combat decisions. At the other extreme, some approaches employ a macromanagement strategy, whereby all agents are controlled by a single decision model. In this paper, we propose a hierarchical command and control architecture, consisting of a single high-level and multiple low-level reinforcement learning agents operating in a dynamic environment. This hierarchical model enables the low-level unit agents to make individual decisions while taking commands from the high-level commander agent. Compared with prior approaches, the proposed model provides the benefits of both flexibility and coordinated control. The performance of this hierarchical control model is demonstrated through empirical experiments in the real-time strategy game StarCraft: Brood War (SCBW).
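The abstract describes the two-level commander/unit architecture only in prose; the sketch below illustrates one possible shape of such a control loop. It is a minimal, assumed implementation: it uses plain tabular Q-learning in place of the paper's self-organizing neural network models, and the ToySkirmishEnv, command set, unit actions, state descriptors, and rewards are hypothetical placeholders rather than the SCBW interface used in the paper's experiments.

```python
import random
from collections import defaultdict


class QAgent:
    """Epsilon-greedy tabular Q-learning agent over discrete states and actions."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # maps (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error


class ToySkirmishEnv:
    """Hypothetical stand-in environment; not the SCBW interface from the paper."""

    max_steps = 20

    def __init__(self, n_units=3):
        self.n_units = n_units

    def global_state(self):
        return random.choice(["winning", "even", "losing"])

    def local_state(self, unit_id):
        return random.choice(["enemy_near", "enemy_far"])

    def step(self, command, unit_actions):
        # Placeholder rewards; a real environment would score the team and each unit.
        team_reward = random.uniform(-1.0, 1.0)
        unit_rewards = [random.uniform(-1.0, 1.0) for _ in unit_actions]
        return team_reward, unit_rewards


# Hypothetical discrete vocabularies for the two levels of the hierarchy.
COMMANDS = ["attack", "retreat", "regroup"]           # issued by the commander
UNIT_ACTIONS = ["move_forward", "move_back", "fire"]  # chosen by each unit


def run_episode(env, commander, units):
    """One episode of the two-level loop: the commander issues a command, units act under it."""
    global_state = env.global_state()
    for _ in range(env.max_steps):
        command = commander.act(global_state)  # high-level, team-wide decision
        transitions = []
        for i, unit in enumerate(units):
            # Each unit conditions its local decision on the commander's command.
            local_state = (env.local_state(i), command)
            transitions.append((i, unit, local_state, unit.act(local_state)))

        team_reward, unit_rewards = env.step(command, [t[3] for t in transitions])

        next_global = env.global_state()
        commander.update(global_state, command, team_reward, next_global)
        for (i, unit, local_state, action), r in zip(transitions, unit_rewards):
            next_local = (env.local_state(i), command)
            unit.update(local_state, action, r, next_local)
        global_state = next_global


if __name__ == "__main__":
    env = ToySkirmishEnv(n_units=3)
    commander = QAgent(COMMANDS)
    units = [QAgent(UNIT_ACTIONS) for _ in range(env.n_units)]
    for _ in range(100):  # a handful of training episodes on the toy environment
        run_episode(env, commander, units)
```

The key design point the sketch reflects is that each unit's state includes the commander's current command, so low-level agents learn command-conditioned behaviors while still acting on their own local observations.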

Keywords

Hierarchical control, Self-organizing neural networks, Reinforcement learning, Real-time strategy games

Discipline

Artificial Intelligence and Robotics | Numerical Analysis and Scientific Computing

Research Areas

Intelligent Systems and Optimization

Publication

Expert Systems with Applications

Volume

186

First Page

1

Last Page

44

ISSN

0957-4174

Identifier

10.1016/j.eswa.2021.115707

Publisher

Elsevier

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1016/j.eswa.2021.115707
