Keywords
Orienteering, Reinforcement learning, Computer science, Embedding, Travelling salesman problem, Context (archaeology), Normalization (sociology), Machine learning, Node (physics), Adaptive routing, Vehicle routing problem, Artificial intelligence, Routing (electronic design automation), Mathematical optimization, Mathematics, Algorithm, Link-state routing protocol, Routing protocol, Computer network, Biology, Engineering, Sociology, Paleontology, Structural engineering, Anthropology
Authors
Yunqiu Xu,Meng Fang,Ling Chen,Gangyan Xu,Yali Du,Chengqi Zhang
Source
Journal: IEEE Transactions on Cybernetics
[Institute of Electrical and Electronics Engineers]
Date: 2021-07-08
Volume/Issue: 52 (10): 11107-11120
Citations: 47
Identifiers
DOI:10.1109/tcyb.2021.3089179
Abstract
In this article, we study reinforcement learning (RL) for vehicle routing problems (VRPs). Recent works have shown that attention-based RL models outperform recurrent neural network-based methods on these problems in terms of both effectiveness and efficiency. However, existing RL models simply aggregate node embeddings to generate the context embedding without taking the dynamic network structures into account, making them incapable of modeling the state-transition and action-selection dynamics. In this work, we develop a new attention-based RL model that provides enhanced node embeddings via batch normalization reordering and gate aggregation, as well as a dynamic-aware context embedding through an attentive aggregation module over multiple relational structures. We conduct experiments on five types of VRPs: 1) travelling salesman problem (TSP); 2) capacitated VRP (CVRP); 3) split delivery VRP (SDVRP); 4) orienteering problem (OP); and 5) prize collecting TSP (PCTSP). The results show that our model not only outperforms the learning-based baselines but also solves the problems much faster than the traditional baselines. In addition, our model shows improved generalizability when evaluated on large-scale problems, as well as on problems with different data distributions.
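To make the abstract's two architectural ideas more concrete, the sketch below shows one way to replace plain averaging of node embeddings with (i) a learned per-node gate and (ii) attention conditioned on the current decoding state (e.g., the last visited node). This is a minimal illustration under assumed shapes and module names (GatedAttentiveContext, d_model, etc.); it is not the authors' exact architecture, which also includes batch normalization reordering and aggregation over multiple relational structures.

```python
# Minimal PyTorch sketch: gated + attentive aggregation of node embeddings
# into a context embedding for a routing decoder. All names are illustrative.
import torch
import torch.nn as nn


class GatedAttentiveContext(nn.Module):
    """Builds a context embedding from node embeddings for a routing decoder."""

    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(d_model, d_model), nn.Sigmoid())
        self.query = nn.Linear(d_model, d_model, bias=False)
        self.key = nn.Linear(d_model, d_model, bias=False)
        self.value = nn.Linear(d_model, d_model, bias=False)

    def forward(self, node_emb: torch.Tensor, last_emb: torch.Tensor,
                mask: torch.Tensor) -> torch.Tensor:
        # node_emb: (B, N, d) node embeddings from the encoder
        # last_emb: (B, d) embedding of the last visited node (decoding state)
        # mask:     (B, N) True for nodes that may still be visited
        gated = self.gate(node_emb) * node_emb                 # gate aggregation
        q = self.query(last_emb).unsqueeze(1)                  # (B, 1, d)
        k, v = self.key(gated), self.value(gated)              # (B, N, d)
        scores = (q @ k.transpose(1, 2)) / k.size(-1) ** 0.5   # (B, 1, N)
        scores = scores.masked_fill(~mask.unsqueeze(1), float("-inf"))
        attn = torch.softmax(scores, dim=-1)                   # attentive weights
        return (attn @ v).squeeze(1)                           # (B, d) context


if __name__ == "__main__":
    B, N, d = 2, 20, 128
    ctx = GatedAttentiveContext(d)
    nodes = torch.randn(B, N, d)
    last = torch.randn(B, d)
    mask = torch.ones(B, N, dtype=torch.bool)
    print(ctx(nodes, last, mask).shape)  # torch.Size([2, 128])
```

The resulting context vector would then feed the decoder's pointer/attention head that scores candidate next nodes; the gating and state-conditioned attention are what make the context sensitive to the evolving route rather than a static mean of all nodes.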