Publication Type
Journal Article
Version
publishedVersion
Publication Date
10-2011
Abstract
Topology-based multi-agent systems (TMAS), wherein agents interact with one another according to their spatial relationship in a network, are well suited for problems with topological constraints. In a TMAS, however, each agent may have a different state space, which can be rather large. Consequently, traditional approaches to multi-agent cooperative learning may not scale up with the complexity of the network topology. In this paper, we propose a cooperative learning strategy under which autonomous agents are assembled in a binary tree formation (BTF). By constraining the interaction between agents, we effectively unify the state space of individual agents and enable policy sharing across agents. Our complexity analysis indicates that multi-agent systems with the BTF have a much smaller state space and a higher level of flexibility, compared with the general form of n-ary (n > 2) tree formation. We have applied the proposed cooperative learning strategy to a class of reinforcement learning agents known as temporal difference-fusion architecture for learning and cognition (TD-FALCON). Comparative experiments on a generic network routing problem, a typical TMAS domain, show that TD-FALCON BTF teams outperform alternative methods, including TD-FALCON teams in single-agent and n-ary tree formation, a Q-learning method based on a table lookup mechanism, and a classical linear programming algorithm. Our study further shows that TD-FALCON BTF can adapt and function well under various scales of network complexity and traffic volume in TMAS domains.
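To illustrate the policy-sharing idea sketched in the abstract, the following is a minimal tabular Q-learning sketch of agents arranged in a binary tree formation, where every agent observes the same unified local state layout and therefore can reuse a single shared value table. This is an illustrative assumption-based sketch only, not the authors' TD-FALCON architecture; all class and parameter names (SharedPolicy, BinaryTreeAgent, the toy state encoding) are hypothetical.

```python
# Hedged sketch: policy sharing across agents in a binary tree formation.
# NOT the TD-FALCON implementation from the paper; a plain tabular
# Q-learning stand-in used only to show why a unified local state
# lets all agents share one policy.
import random
from collections import defaultdict


class SharedPolicy:
    """One Q-table shared by every agent in the binary tree formation.

    Because each agent sees the same unified local state layout
    (destination bucket x load of its at-most-two children), the same
    table can be reused and updated by all agents."""

    def __init__(self, n_actions=2, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions

    def act(self, state):
        # Epsilon-greedy action selection over the shared table.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward, next_state):
        # Standard one-step temporal-difference (Q-learning) update.
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])


class BinaryTreeAgent:
    """A node in the binary tree; action 0/1 = forward packet to left/right child."""

    def __init__(self, node_id, shared_policy):
        self.node_id = node_id
        self.policy = shared_policy

    def local_state(self, destination_bucket, left_load, right_load):
        # Unified local state: identical structure for every agent,
        # regardless of where it sits in the (possibly large) network.
        return (destination_bucket, left_load, right_load)
```

In this sketch, every agent calls the shared policy's act and update methods on its own local state, so experience gathered at one node immediately benefits all other nodes; with an n-ary (n > 2) formation the per-node state would grow with the branching factor, which is the state-space saving the abstract's complexity analysis refers to.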
Keywords
Topology-based multi-agent systems, Cooperative learning, Reinforcement learning, Binary tree formation, Policy sharing
Discipline
Databases and Information Systems | Programming Languages and Compilers | Software Engineering
Research Areas
Data Science and Engineering
Publication
Autonomous Agents and Multi-Agent Systems
Volume
26
Issue
1
First Page
86
Last Page
119
ISSN
1387-2532
Identifier
10.1007/s10458-011-9183-4
Publisher
Springer Verlag (Germany)
Citation
XIAO, Dan and TAN, Ah-hwee. Cooperative reinforcement learning in topology-based multi-agent systems. (2011). Autonomous Agents and Multi-Agent Systems, 26(1), 86-119.
Available at: https://ink.library.smu.edu.sg/sis_research/5242
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1007/s10458-011-9183-4
Included in
Databases and Information Systems Commons, Programming Languages and Compilers Commons, Software Engineering Commons