Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

5-2021

Abstract

In modular reinforcement learning (MRL), a complex decision-making problem is decomposed into multiple simpler subproblems, each solved by a separate module. Often, these subproblems have conflicting goals and incomparable reward scales. A composable decision-making architecture requires that even modules authored separately, with possibly misaligned reward scales, can be combined coherently. An arbitrator should consider the different modules' action preferences to learn effective global action selection. We present a novel framework called GRACIAS that assigns fine-grained importance to the different modules based on their relevance in a given state, and enables composable decision making based on modern deep RL methods such as deep deterministic policy gradient (DDPG) and deep Q-learning. We provide insights into the convergence properties of GRACIAS and also show that previous MRL algorithms reduce to special cases of our framework. We experimentally demonstrate on several standard MRL domains that our approach performs significantly better than previous MRL methods and is highly robust to incomparable reward scales. Our framework extends MRL to complex Atari games such as Qbert, and has a better learning curve than conventional RL algorithms.
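To make the arbitration idea in the abstract concrete, the sketch below shows one minimal, tabular interpretation of modular RL with a state-dependent arbitrator: each module learns Q-values for its own sub-reward, and an arbitrator assigns per-state importance weights that combine the modules' (scale-normalised) action preferences into a single greedy choice. This is an illustrative assumption, not the GRACIAS implementation (which uses deep RL methods such as DDPG and deep Q-learning); the names `ModuleQ`, `Arbitrator`, and `select_action`, and the placeholder arbitrator update rule, are all hypothetical.

```python
import numpy as np


class ModuleQ:
    """A toy module: tabular Q-learning on the module's own sub-reward."""

    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.q = rng.normal(scale=0.01, size=(n_states, n_actions))
        self.lr, self.gamma = lr, gamma

    def update(self, s, a, r, s_next):
        # Standard Q-learning update using this module's reward signal only.
        target = r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])


class Arbitrator:
    """State-dependent importance weights over modules (softmax of learned logits)."""

    def __init__(self, n_states, n_modules, lr=0.05):
        self.logits = np.zeros((n_states, n_modules))
        self.lr = lr

    def weights(self, s):
        z = self.logits[s] - self.logits[s].max()
        e = np.exp(z)
        return e / e.sum()

    def update(self, s, per_module_signal):
        # Placeholder rule: nudge logits toward modules with a higher
        # advantage-like signal in state s (the real method learns this end to end).
        self.logits[s] += self.lr * per_module_signal


def select_action(modules, arbitrator, s):
    """Combine per-module preferences with per-state weights, then act greedily."""
    w = arbitrator.weights(s)
    prefs = []
    for m in modules:
        q = m.q[s]
        span = q.max() - q.min()
        # Min-max normalise each module's preferences so incomparable
        # reward scales cannot dominate the combination.
        prefs.append((q - q.min()) / span if span > 0 else np.zeros_like(q))
    combined = np.tensordot(w, np.stack(prefs), axes=1)
    return int(np.argmax(combined))
```

Under this reading, a fixed uniform weighting recovers a simple "add the module values" arbitration, which is one way previous MRL schemes can appear as special cases of a state-dependent weighting.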

Keywords

Reinforcement learning; Coordination and control; Deep learning

Discipline

Databases and Information Systems

Research Areas

Data Science and Engineering

Publication

Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), Virtual Conference, May 3-7

First Page

565

Last Page

573

Publisher

IFAAMAS

City or Country

United Kingdom
