Publication Type

Conference Proceeding Article

Version

Published version

Publication Date

May 2016

Abstract

We consider the problem of trajectory prediction, where a trajectory is an ordered sequence of location visits and corresponding timestamps. The problem arises when an agent makes sequential decisions to visit a set of spatial locations of interest. Each location bears a stochastic utility, and the agent has a limited budget to spend. Given the agent's observed partial trajectory, our goal is to predict the remaining trajectory. We propose a solution framework that accounts for both the uncertainty of the utilities and the budget constraint. We use reinforcement learning (RL) to model the underlying decision processes and inverse RL to learn the utility distributions of the locations. We then propose two decision models for making predictions: one based on the long-term optimal planning of RL, the other on myopic heuristics. Finally, we apply the framework to predict real-world human trajectories and are able to explain the underlying decision processes behind the observed actions.
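
The abstract contrasts two prediction policies. Below is a minimal, hypothetical sketch of the contrast, not the paper's code: it assumes a simple budget-constrained MDP where states are (location, remaining budget) pairs, each location has a mean utility (which in the paper would come from inverse RL), and moving between locations consumes budget. All names, costs, and utilities are illustrative placeholders.

```python
import numpy as np

# Illustrative setup: locations with (assumed) learned mean utilities
# and integer travel costs. Values are placeholders, not learned ones.
np.random.seed(0)
n_loc = 5                      # number of locations of interest
budget = 10                    # total budget (e.g., time units)
mu = np.random.rand(n_loc)     # mean utilities (inverse RL would supply these)
cost = np.random.randint(1, 4, size=(n_loc, n_loc))  # travel costs
np.fill_diagonal(cost, 0)

def plan_value_iteration():
    """Long-term optimal planning: dynamic programming over
    (location, remaining budget) states."""
    # V[i][b] = best expected future utility from location i with budget b
    V = np.zeros((n_loc, budget + 1))
    best = np.full((n_loc, budget + 1), -1, dtype=int)
    for b in range(1, budget + 1):
        for i in range(n_loc):
            for j in range(n_loc):
                if j != i and cost[i][j] <= b:
                    val = mu[j] + V[j][b - cost[i][j]]
                    if val > V[i][b]:
                        V[i][b], best[i][b] = val, j
    return V, best

def myopic_next(i, b):
    """Myopic heuristic: pick the feasible next location with the best
    immediate utility, ignoring future consequences."""
    feas = [j for j in range(n_loc) if j != i and cost[i][j] <= b]
    return max(feas, key=lambda j: mu[j]) if feas else None

def rollout(i, b, policy):
    """Follow a policy until the budget admits no further move."""
    traj = [i]
    while True:
        j = policy(i, b)
        if j is None:
            return traj
        b -= cost[i][j]
        i = j
        traj.append(j)

V, best = plan_value_iteration()
planner = lambda i, b: best[i][b] if best[i][b] >= 0 else None
print("planned trajectory:", rollout(0, budget, planner))
print("myopic trajectory: ", rollout(0, budget, myopic_next))
```

The planner maximizes cumulative expected utility over the whole remaining budget, while the myopic policy greedily takes the best immediate step; comparing the two rollouts shows how the policies can diverge from the same partial trajectory.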

Keywords

reinforcement learning, budget constraint, stochastic utility, Markov decision process, sequential decisions, trajectory prediction

Discipline

Artificial Intelligence and Robotics | Computer Sciences | Operations Research, Systems Engineering and Industrial Engineering

Research Areas

Intelligent Systems and Optimization

Publication

AAMAS '16: Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems, Singapore, May 9-13, 2016

First Page

1449

Last Page

1450

Publisher

IFAAMAS

City or Country

Ann Arbor, MI
