Reinforcement learning
Benchmark (surveying)
Markov decision process
Computer science
Task (project management)
Operations research
Revenue
Order (exchange)
Process (computing)
Service (business)
Markov chain
Routing (electronic design automation)
Artificial intelligence
Markov process
Mathematical optimization
Machine learning
Engineering
Economics
Marketing
Business
Mathematics
Computer network
Finance
Accounting
Operating system
Statistics
Systems engineering
Geography
Geodesy
Authors
Aysun Bozanta,Mücahit Çevik,Can Kavaklioğlu,Eray Mert Kavuk,Ayşe Tosun,Sibel B. Sonuç,Alper Duranel,Ayşe Bener
Identifier
DOI:10.1016/j.cie.2021.107871
Abstract
We consider a Markov decision process model mimicking a real-world food delivery service, where the objective is to maximize the revenue derived from served requests given a limited number of couriers over a period of time. The model incorporates the courier location, order origin, and order destination. Each courier’s task is to pick up an assigned order and deliver it to the requested destination. We apply three different approaches to solve this problem. In the first approach, we simplify the model to a one-courier case and then solve the resulting model using Q-learning. The resulting policy is used for each courier in the model with more than one courier, based on the assumption that all couriers are identical. In the second approach, we use the same logic; however, the underlying one-courier model is solved using Double Deep Q-Networks (DDQN). In the third approach, the extensive model is considered, where a system state consists of the positions of all couriers and all orders in the system. We use DDQN to solve the extensive model. Policies generated by these approaches are compared against a benchmark rule-based policy. We observe that the policy obtained by training a single courier with Q-learning accumulates higher rewards than the rule-based policy. In addition, the DDQN algorithm for a single courier outperforms both the Q-learning and rule-based approaches; however, the performance of DDQN is highly dependent on the hyper-parameters of the algorithm.
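The paper does not include code; the following is a minimal sketch of the first approach described in the abstract, tabular Q-learning on a simplified single-courier MDP. The environment here is a hypothetical one-dimensional toy world, and the state layout, reward values, and all names are illustrative assumptions rather than the authors' model.

```python
import random
from collections import defaultdict

# Hypothetical toy environment standing in for the paper's single-courier MDP:
# the state is (courier cell, order origin cell, order destination cell) on a
# small 1-D strip of N cells; actions move the courier left, right, or stay.
N = 5
ACTIONS = (-1, 0, 1)

def step(state, action):
    courier, origin, dest = state
    courier = min(max(courier + action, 0), N - 1)
    reward = -0.1                    # small per-step cost to encourage short routes
    if courier == origin:            # pick up the assigned order
        origin = None
    if origin is None and courier == dest:   # deliver: collect revenue, spawn new order
        reward += 10.0
        origin, dest = random.randrange(N), random.randrange(N)
    return (courier, origin, dest), reward

# Tabular Q-learning update:
#   Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.95, 0.1
state = (0, 2, 4)
for _ in range(50_000):
    if random.random() < eps:        # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```

Under the identical-couriers assumption stated in the abstract, a policy learned this way for one courier would then be applied independently to each courier in the multi-courier system.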