Publication Type
Journal Article
Version
acceptedVersion
Publication Date
9-2022
Abstract
Recent studies on using deep learning to solve routing problems focus on construction heuristics, whose solutions are still far from optimal. Improvement heuristics have great potential to narrow this gap by iteratively refining a solution. However, classic improvement heuristics are all guided by hand-crafted rules, which may limit their performance. In this paper, we propose a deep reinforcement learning framework to learn improvement heuristics for routing problems. We design a self-attention-based deep architecture as the policy network to guide the selection of the next solution. We apply our method to two important routing problems, i.e., the traveling salesman problem (TSP) and the capacitated vehicle routing problem (CVRP). Experiments show that our method outperforms state-of-the-art deep-learning-based approaches. The learned policies are more effective than traditional hand-crafted ones and can be further enhanced by simple diversifying strategies. Moreover, the policies generalize well to different problem sizes, initial solutions, and even real-world datasets.
Keywords
Routing, Heuristic algorithms, Vehicle routing, Traveling salesman problems, Training, Task analysis, Search problems
Discipline
OS and Networks
Research Areas
Intelligent Systems and Optimization
Publication
IEEE Transactions on Neural Networks and Learning Systems
Volume
33
Issue
9
First Page
5057
Last Page
5069
ISSN
2162-237X
Identifier
10.1109/TNNLS.2021.3068828
Publisher
Institute of Electrical and Electronics Engineers
Citation
WU, Yaoxin; SONG, Wen; CAO, Zhiguang; ZHANG, Jie; and LIM, Andrew.
Learning improvement heuristics for solving routing problems. (2022). IEEE Transactions on Neural Networks and Learning Systems. 33, (9), 5057-5069.
Available at: https://ink.library.smu.edu.sg/sis_research/8129
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1109/TNNLS.2021.3068828