Transferring expectations in model-based reinforcement learning
Publication Type
Conference Proceeding Article
Publication Date
12-2012
Abstract
We study how to automatically select and adapt multiple abstractions, or representations, of the world to support model-based reinforcement learning. We address the challenges of transfer learning in heterogeneous environments with varying tasks. We present an efficient, online framework that, over a sequence of tasks, learns a set of relevant representations to be used in future tasks. We introduce a general approach that supports transfer learning across different state spaces without requiring predefined mapping strategies. We demonstrate the potential impact of our system through improved jumpstart performance and faster convergence to a near-optimal policy in two benchmark domains.
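The core mechanism sketched in the abstract is to carry forward a library of world models learned in earlier tasks and to decide online which of them best explains the task at hand. The following is a minimal sketch of that general idea only; the class names, the Laplace smoothing, and the log-likelihood weighting rule are illustrative assumptions, not the paper's actual algorithm.

    import math
    from collections import defaultdict

    class CountModel:
        """Tabular transition model built from (s, a, s') counts."""
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def update(self, s, a, s_next):
            self.counts[(s, a)][s_next] += 1

        def prob(self, s, a, s_next, n_states):
            """Laplace-smoothed predictive probability of s_next given (s, a)."""
            c = self.counts[(s, a)]
            return (c[s_next] + 1.0) / (sum(c.values()) + n_states)

    class ModelLibrary:
        """Models learned on past tasks, re-weighted online by how well
        each one predicts the transitions observed in the current task."""
        def __init__(self, models, n_states):
            self.models = models
            self.n_states = n_states
            self.log_w = [0.0] * len(models)  # log predictive likelihoods

        def observe(self, s, a, s_next):
            # Score every stored model against the newly observed transition.
            for i, m in enumerate(self.models):
                self.log_w[i] += math.log(m.prob(s, a, s_next, self.n_states))

        def best(self):
            # The model most consistent with the current task so far;
            # a planner (e.g., value iteration) would use it to act.
            i = max(range(len(self.models)), key=lambda i: self.log_w[i])
            return self.models[i]

On a new task, all weights start equal; each observed transition re-scores the library, and planning against best() is what could produce the jumpstart and faster-convergence behavior the abstract describes.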
Keywords
Benchmark domains, Faster convergence, General approach, Heterogeneous environments, Mapping strategy, Model-based reinforcement learning, Potential impacts, Transfer learning
Discipline
Numerical Analysis and Scientific Computing
Publication
Advances in Neural Information Processing Systems 25 (NIPS 2012)
Volume
4
First Page
2555
Last Page
2563
ISBN
9781627480031
Publisher
Curran Associates
City or Country
Red Hook, NY, USA
Citation
Nguyen, Trung Thanh; Silander, Tomi; and Leong, Tze-Yun.
Transferring expectations in model-based reinforcement learning. (2012). Advances in Neural Information Processing Systems 25 (NIPS 2012). 4, 2555-2563.
Available at: https://ink.library.smu.edu.sg/sis_research/3049
Additional URL
http://api.elsevier.com/content/abstract/scopus_id/84877760554