Publication Type

Conference Proceeding Article

Publication Date



Abstract

Taxis (including cars working with aggregation systems such as Uber, Grab, and Lyft) have become a critical component of urban transportation. While most research and applications in the context of taxis have focused on improving performance from a customer perspective, in this paper we focus on improving performance from a taxi driver perspective. Higher revenues for taxi drivers can help bring more drivers into the system, thereby improving availability for customers in dense urban cities. Typically, when there is no customer on board, taxi drivers will cruise around to find customers either directly (on the street) or indirectly (through a request from a nearby customer by phone or via an aggregation system). For such cruising taxis, we develop a Reinforcement Learning (RL) based system that learns from real trajectory logs of drivers to advise them on the right locations to find customers so as to maximize their revenue. There are multiple translational challenges involved in building this RL system on real data, such as annotating the activities (e.g., roaming, going to a taxi stand, etc.) observed in trajectory logs, identifying the right features for the state and action spaces, and evaluating against the real driver performance observed in the dataset. We also provide a dynamic abstraction mechanism to improve the basic learning mechanism. Finally, we provide a thorough evaluation on a real-world dataset from a developed Asian city and demonstrate that an RL based system can provide significant benefits to the drivers.
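To make the general idea concrete: an RL formulation of the cruising problem can treat city zones as states, "cruise to zone" choices as actions, and fares (net of cruising cost) as rewards. The sketch below is a minimal tabular Q-learning illustration under those assumptions, with made-up zone names and synthetic rewards; it is not the paper's actual state/action design, features, or learning mechanism.

```python
import random

# Hypothetical toy setup: states are city zones, actions are "cruise to zone".
# Zone names and rewards are invented stand-ins for what would be extracted
# from real trajectory logs in the paper's system.
ZONES = ["downtown", "airport", "suburb"]

# Synthetic expected pickup reward per zone (stand-in for observed fares).
PICKUP_REWARD = {"downtown": 8.0, "airport": 15.0, "suburb": 2.0}

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Learn Q(zone, cruise-to-zone) values with epsilon-greedy updates."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in ZONES for a in ZONES}
    for _ in range(episodes):
        s = rng.choice(ZONES)
        for _ in range(10):  # one short cruising episode
            # Epsilon-greedy action selection over destination zones.
            if rng.random() < epsilon:
                a = rng.choice(ZONES)
            else:
                a = max(ZONES, key=lambda z: q[(s, z)])
            r = PICKUP_REWARD[a] - 1.0  # fare minus a unit cruising cost
            best_next = max(q[(a, z)] for z in ZONES)
            # Standard Q-learning temporal-difference update.
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = a
    return q

def recommend(q, zone):
    """Advise the cruising destination with the highest learned value."""
    return max(ZONES, key=lambda z: q[(zone, z)])
```

In this toy instance the learned policy simply steers drivers toward the highest-reward zone from anywhere; the paper's contribution lies in doing this from real, noisy trajectory logs with annotated activities and a dynamic state abstraction, none of which this sketch attempts.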


Artificial Intelligence and Robotics | Computer Sciences | Transportation

Research Areas

Intelligent Systems and Decision Analytics


Proceedings of the Twenty-Seventh International Conference on Automated Planning and Scheduling (ICAPS 2017): Pittsburgh, June 18-23

First Page


Last Page



Publisher

AAAI Press

City or Country

Menlo Park, CA

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

Additional URL