Publication Type

Journal Article

Version

acceptedVersion

Publication Date

3-2020

Abstract

Reducing traffic delay is of crucial importance for the development of sustainable transportation systems, yet it remains a challenging task in the study of the stochastic shortest path (SSP) problem. Existing methods based on the probability tail model for the SSP problem seek the path that minimizes the probability of delay occurrence, which is equivalent to maximizing the probability of reaching the destination before a deadline (i.e., arriving on time). However, these methods suffer from either low accuracy or high computational cost. We therefore design a novel and practical Q-learning approach in which the converged Q-values have a concrete interpretation as the actual probabilities of arriving on time, improving the accuracy of finding the true optimal path. By further adopting dynamic neural networks to learn the value function, our approach scales well to large road networks with arbitrary deadlines. Moreover, our approach can be implemented in a time-dependent manner, which further improves the quality of the returned path. Experimental results on road networks with real mobility data, including Beijing, Munich, and Singapore, demonstrate the significant advantages of the proposed approach over other methods.
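
The abstract describes the key idea only at a high level. As a rough, self-contained illustration of how converged Q-values can coincide with probabilities of arriving on time, the following tabular Q-learning toy may help. It is a minimal sketch, not the authors' implementation: the toy graph, travel-time samples, deadline, and learning rate are all hypothetical, and the paper itself uses dynamic neural networks rather than a table to scale to real road networks.

import random
from collections import defaultdict

# Hypothetical toy network: edge -> list of possible travel times (minutes).
GRAPH = {
    'A': {'B': [4, 6], 'C': [2, 9]},
    'B': {'D': [5, 7]},
    'C': {'D': [3, 12]},
    'D': {},                       # destination has no outgoing edges
}
DEST, DEADLINE = 'D', 15
ALPHA, EPISODES = 0.1, 20000       # illustrative hyperparameters

# Q[(node, budget)][next_node] estimates the probability of reaching DEST
# within `budget` remaining time units when moving to `next_node` first.
Q = defaultdict(lambda: defaultdict(float))

def best_prob(node, budget):
    """Greedy value: 1.0 on on-time arrival, 0.0 when the deadline is missed."""
    if node == DEST:
        return 1.0 if budget >= 0 else 0.0
    if budget < 0 or not GRAPH[node]:
        return 0.0
    return max(Q[(node, budget)][a] for a in GRAPH[node])

for _ in range(EPISODES):
    node, budget = 'A', DEADLINE
    while node != DEST and budget >= 0 and GRAPH[node]:
        nxt = random.choice(list(GRAPH[node]))   # exploratory policy
        t = random.choice(GRAPH[node][nxt])      # sampled stochastic travel time
        target = best_prob(nxt, budget - t)      # Bellman backup on probabilities
        q = Q[(node, budget)][nxt]
        Q[(node, budget)][nxt] = q + ALPHA * (target - q)
        node, budget = nxt, budget - t

# After convergence, each Q-value approximates the on-time arrival probability
# of committing to that next hop with the current time budget.
print({a: round(p, 3) for a, p in Q[('A', DEADLINE)].items()})

Because the only "reward" is 1 for reaching the destination within the budget and 0 otherwise, the fixed point of the update is exactly the probability of arriving on time, which is the property the abstract attributes to the converged Q-values.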

Keywords

Reinforcement learning, Transportation, Arriving on time, Vehicle routing, Q-learning

Discipline

Databases and Information Systems | Transportation

Research Areas

Data Science and Engineering; Intelligent Systems and Optimization

Publication

IEEE Transactions on Vehicular Technology

Volume

69

Issue

3

First Page

2424

Last Page

2436

ISSN

0018-9545

Identifier

10.1109/TVT.2020.2964784

Publisher

Institute of Electrical and Electronics Engineers

Copyright Owner and License

Authors

Additional URL

http://doi.org/10.1109/TVT.2020.2964784
