Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
5-2021
Abstract
In modular reinforcement learning (MRL), a complex decision-making problem is decomposed into multiple simpler subproblems, each solved by a separate module. Often, these subproblems have conflicting goals and incomparable reward scales. A composable decision-making architecture requires that even modules authored separately, with possibly misaligned reward scales, can be combined coherently. An arbitrator should consider the different modules’ action preferences to learn effective global action selection. We present a novel framework called GRACIAS that assigns fine-grained importance to the different modules based on their relevance in a given state, and enables composable decision making based on modern deep RL methods such as deep deterministic policy gradient (DDPG) and deep Q-learning. We provide insights into the convergence properties of GRACIAS and also show that previous MRL algorithms reduce to special cases of our framework. We experimentally demonstrate on several standard MRL domains that our approach works significantly better than previous MRL methods and is highly robust to incomparable reward scales. Our framework extends MRL to complex Atari games such as Qbert, and has a better learning curve than conventional RL algorithms.
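The arbitrated action selection the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes each module reports Q-values over a shared action set and that the arbitrator combines them with importance weights; the function name, the fixed weights, and the toy Q-values are all hypothetical (GRACIAS learns state-dependent weights rather than fixing them).

```python
def select_action(module_q_values, weights):
    """Pick the action maximizing the weight-combined module preferences.

    module_q_values: one list of Q-values per module, over a shared action set.
    weights: one importance weight per module (fixed here for illustration;
    in GRACIAS these would be learned per state).
    """
    num_actions = len(module_q_values[0])
    # Weighted sum of each module's Q-value for every action.
    combined = [
        sum(w * q[a] for w, q in zip(weights, module_q_values))
        for a in range(num_actions)
    ]
    return max(range(num_actions), key=lambda a: combined[a])


# Hypothetical example: two modules with conflicting goals.
q_mod = [
    [1.0, 0.2, 0.0],  # module 1 prefers action 0
    [0.0, 0.1, 0.9],  # module 2 prefers action 2
]
print(select_action(q_mod, [0.7, 0.3]))  # arbitrator favors module 1 -> 0
print(select_action(q_mod, [0.2, 0.8]))  # arbitrator favors module 2 -> 2
```

Shifting the weights flips the global choice, which is why misaligned reward scales across separately authored modules make a learned, state-dependent arbitrator necessary.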
Keywords
Reinforcement Learning, Coordination and Control, Deep Learning
Discipline
Theory and Algorithms
Publication
Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Conference, 2021 May 3-7
City or Country
London, UK
Citation
GUPTA, Vaibhav; ANAND, Daksh; PARUCHURI, Praveen; and KUMAR, Akshat.
Action selection for composable modular deep reinforcement learning. (2021). Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Conference, 2021 May 3-7.
Available at: https://ink.library.smu.edu.sg/sis_research/6179
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.