Transferring expectations in model-based reinforcement learning

Publication Type

Conference Proceeding Article

Publication Date

12-2012

Abstract

We study how to automatically select and adapt multiple abstractions, or representations, of the world to support model-based reinforcement learning. We address the challenges of transfer learning in heterogeneous environments with varying tasks. We present an efficient online framework that, through a sequence of tasks, learns a set of relevant representations to be used in future tasks. Without requiring predefined mapping strategies, our general approach supports transfer learning across different state spaces. We demonstrate the potential impact of our framework through improved jumpstart and faster convergence to near-optimal policies in two benchmark domains.
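The sketch below is a minimal illustration (not the paper's method or API) of the general idea the abstract describes: keeping a library of world models learned on earlier tasks and re-weighting them online by how well they predict transitions in a new task, which is what yields a jumpstart when one of the old models fits. All class and function names, and the multiplicative-weights update, are illustrative assumptions.

```python
# Hypothetical sketch: a library of candidate world models learned on past
# tasks, re-weighted online by predictive accuracy on the current task.
# Names and the weighting scheme are assumptions, not the paper's algorithm.

class TabularModel:
    """A simple count-based transition model over a discrete state space."""
    def __init__(self):
        self.counts = {}  # (state, action) -> {next_state: count}

    def update(self, s, a, s_next):
        self.counts.setdefault((s, a), {}).setdefault(s_next, 0)
        self.counts[(s, a)][s_next] += 1

    def prob(self, s, a, s_next):
        dist = self.counts.get((s, a), {})
        total = sum(dist.values())
        # Fall back to an uninformative prior for unseen (s, a) pairs.
        return dist.get(s_next, 0) / total if total else 0.5

def reweight(models, weights, s, a, s_next):
    """Multiplicative-weights style update: models that predicted the
    observed transition well gain influence on future decisions."""
    weights = [w * max(m.prob(s, a, s_next), 1e-6)
               for w, m in zip(weights, models)]
    z = sum(weights)
    return [w / z for w in weights]

# Toy usage: two models trained on earlier tasks compete to explain
# transitions in a new task; the better predictor dominates quickly.
old_task_models = [TabularModel(), TabularModel()]
old_task_models[0].update("s0", "a", "s1")   # model 0 expects s0 -a-> s1
old_task_models[1].update("s0", "a", "s2")   # model 1 expects s0 -a-> s2

weights = [0.5, 0.5]
for _ in range(5):  # the new task actually behaves like model 0
    weights = reweight(old_task_models, weights, "s0", "a", "s1")
print(weights)  # weight mass shifts toward model 0
```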

Keywords

Benchmark domains, Faster convergence, General approach, Heterogeneous environments, Mapping strategy, Model-based reinforcement learning, Potential impacts, Transfer learning

Discipline

Numerical Analysis and Scientific Computing

Publication

Advances in Neural Information Processing Systems 25 (NIPS 2012)

Volume

4

First Page

2555

Last Page

2563

ISBN

9781627480031

Publisher

Curran Associates

City or Country

New York, USA

Additional URL

http://api.elsevier.com/content/abstract/scopus_id/84877760554
