Transferring expectations in model-based reinforcement learning
Conference Proceeding Article
We study how to automatically select and adapt multiple abstractions or representations of the world to support model-based reinforcement learning. We address the challenges of transfer learning in heterogeneous environments with varying tasks. We present an efficient, online framework that, over a sequence of tasks, learns a set of relevant representations to be reused in future tasks. Without requiring predefined mapping strategies, our general approach supports transfer learning across tasks with different state spaces. We demonstrate the potential impact of our system through improved jumpstart performance and faster convergence to near-optimal policies in two benchmark domains.
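As a rough illustration of the kind of mechanism the abstract describes, online selection among a library of previously learned world models can be framed as Bayesian reweighting by predictive likelihood. The sketch below is an assumption-laden outline, not the authors' implementation; all class and function names are illustrative.

    # A minimal sketch (assumed interface, not the paper's code): keep a library
    # of transition models learned from earlier tasks and reweight them online
    # by how well each one predicts the transitions observed in the current task.

    class TransitionModel:
        """Hypothetical interface for a world model from a previous task."""
        def prob(self, state, action, next_state):
            """Return P(next_state | state, action) under this model."""
            raise NotImplementedError

    def reweight(models, weights, state, action, next_state):
        """Multiply each model's weight by the likelihood it assigns to the
        observed transition, then renormalize (a simple Bayesian update)."""
        new_w = [w * max(m.prob(state, action, next_state), 1e-12)
                 for m, w in zip(models, weights)]
        total = sum(new_w)
        return [w / total for w in new_w]

    def mixture_prob(models, weights, state, action, next_state):
        """Weighted mixture prediction over the library, usable by a planner."""
        return sum(w * m.prob(state, action, next_state)
                   for m, w in zip(models, weights))

Under these assumptions, a new task starts with uniform weights over the library, and models that transfer well quickly dominate the mixture, which is one way improved jumpstart behavior can arise.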
Benchmark domains, Faster convergence, General approach, Heterogeneous environments, Mapping strategy, Model-based reinforcement learning, Potential impacts, Transfer learning
Numerical Analysis and Scientific Computing
Data Management and Analytics
Advances in Neural Information Processing Systems 25 (NIPS 2012)
City or Country: New York, USA
Nguyen, Trung Thanh; Silander, Tomi; and Leong, Tze-Yun. Transferring expectations in model-based reinforcement learning. (2012). Advances in Neural Information Processing Systems 25 (NIPS 2012). 4, 2555-2563. Research Collection School of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3049