Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

5-2024

Abstract

Cooperative multi-agent reinforcement learning (MARL) methods aim to learn effective collaborative behaviours among multiple agents performing complex tasks. However, existing MARL methods are commonly proposed for fairly small-scale benchmark problems, in which both the number of agents and the length of the time horizon are restricted. My initial work investigates hierarchical control of multi-agent systems, where a unified overarching framework coordinates multiple smaller multi-agent subsystems to tackle complex, long-horizon tasks involving multiple objectives. Addressing another critical need in the field, my research introduces a comprehensive benchmark for evaluating MARL methods in long-horizon, multi-agent, multi-objective scenarios. This benchmark aims to fill the current gap in the MARL community for assessing methods in more complex and realistic settings. My dissertation will focus on proposing and evaluating methods for scaling up multi-agent systems along two dimensions: structurally, by increasing the number of reinforcement learning agents, and temporally, by extending the planning horizon and the complexity of the problem domains in which agents are deployed.

Keywords

Multi-agent Reinforcement Learning, Scaling up MARL, Long-horizon MARL, Hierarchical Multi-agent Systems, Task Decomposition, Multi-agent learning, Reinforcement learning, Scalability, Collective learning

Discipline

Databases and Information Systems

Research Areas

Intelligent Systems and Optimization

Publication

Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems

First Page

2737

Last Page

2739

ISBN

9798400704864

Publisher

International Foundation for Autonomous Agents and Multiagent Systems

City or Country

Richland, SC

Comments

PDF provided by Author
