Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
5-2024
Abstract
This paper introduces a method to explain the behaviors of multi-agent deep reinforcement learning (MADRL) agents by abstracting their actions into high-level strategies. Specifically, a spatio-temporal neural network model encodes the agents’ sequences of actions as memory episodes, from which an aggregating memory retrieval generalizes them into a concise abstract representation of collective strategies. To assess the effectiveness of our method, we applied it to explain the actions of QMIX MADRL agents playing the StarCraft Multi-Agent Challenge (SMAC) video game. A user study on the perceived explainability of the extracted strategies indicates that our method can provide comprehensible explanations at various levels of granularity.
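To make the pipeline described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: encode_episode is a hypothetical stand-in for the spatio-temporal neural encoder (here it simply averages one-hot action frequencies over time), and k-means clustering stands in for the aggregating memory retrieval that groups episodes into a small set of abstract strategies. All sizes (action space, agent count, episode length) are assumed for the example.

# Illustrative sketch only -- not the paper's method.
import numpy as np
from sklearn.cluster import KMeans

N_ACTIONS = 10      # size of the discrete action space (assumed)
N_AGENTS = 3        # number of cooperating agents (assumed)
EPISODE_LEN = 50    # time steps per episode (assumed)


def encode_episode(actions: np.ndarray) -> np.ndarray:
    """Encode one episode of joint actions (T, n_agents) into a fixed-length
    vector by averaging per-agent one-hot action frequencies over time.
    Placeholder for the spatio-temporal neural encoder in the paper."""
    one_hot = np.eye(N_ACTIONS)[actions]        # (T, n_agents, n_actions)
    return one_hot.mean(axis=0).reshape(-1)     # (n_agents * n_actions,)


def extract_strategies(episodes: list, n_strategies: int = 4):
    """Cluster encoded episodes into a small set of candidate strategies,
    standing in for the aggregating memory retrieval."""
    embeddings = np.stack([encode_episode(ep) for ep in episodes])
    km = KMeans(n_clusters=n_strategies, n_init=10, random_state=0)
    labels = km.fit_predict(embeddings)
    return labels, km.cluster_centers_


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic action logs standing in for QMIX agents' trajectories.
    episodes = [rng.integers(0, N_ACTIONS, size=(EPISODE_LEN, N_AGENTS))
                for _ in range(100)]
    labels, centers = extract_strategies(episodes)
    print("strategy label per episode:", labels[:10])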
Keywords
Multi-agent Deep Reinforcement Learning; Explainable Artificial Intelligence; Sequential Decision Making
Discipline
Databases and Information Systems
Research Areas
Intelligent Systems and Optimization
Publication
Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024): Auckland, New Zealand, May 6-10
First Page
2537
Last Page
2539
ISBN
9798400704864
Publisher
International Foundation for Autonomous Agents and Multiagent Systems
City or Country
Auckland, New Zealand
Citation
KHAING, Phyo Wai; GENG, Minghong; PATERIA, Shubham; SUBAGDJA, Budhitama; and TAN, Ah-hwee.
Explaining sequences of actions in multi-agent deep reinforcement learning models. (2024). Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024): Auckland, New Zealand, May 6-10. 2537-2539.
Available at: https://ink.library.smu.edu.sg/sis_research/9783
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Comments
PDF provided by faculty